I have been struggling with this one all day: I am trying to create a Function App function key from an ARM template.
So far I have been able to create my function key on the Host level using the following template:
{
"type": "Microsoft.Web/sites/host/functionKeys",
"apiVersion": "2018-11-01",
"name": "[concat(parameters('appServiceName'), '/default/PortalFunctionKey')]",
"properties": {
"name": "PortalFunctionKey"
}
}
Then I found a couple of articles and links showing it is possible via the API:
https://github.com/Azure/azure-functions-host/wiki/Key-management-API
And I was able to generate it via this API posting to:
https://{myfunctionapp}.azurewebsites.net/admin/functions/{MyFunctionName}/keys/{NewKeyName}?code={_masterKey}
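(In PowerShell terms, that POST looks roughly like the following sketch, with placeholder names and the host's _master key:)
$masterKey = "<master key>"
Invoke-RestMethod -Method Post -Uri "https://myfunctionapp.azurewebsites.net/admin/functions/MyFunctionName/keys/PortalFunctionKey?code=$masterKey" -ContentType "application/json"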
But I can't for the life of me figure out how to do that in my ARM template!
I have tried various combinations of type and name, for example:
{
"type": "Microsoft.Web/sites/host/functionKeys",
"apiVersion": "2018-11-01",
"name": "[concat(parameters('appServiceName'), '/{myfunctionName}/PortalFunctionKey')]",
"properties": {
"name": "PortalFunctionKey"
}
}
or /functions/{myfunctionName}/PortalFunctionKey
as suggested in some articles, and I just can't get any of them to work. I can't find much documentation on the ARM Microsoft.Web/sites/host/functionKeys type either.
Did anyone succeed in creating a FUNCTION key (not a host key) in an ARM template? I would gladly hear how you got there :)!
Many thanks in advance,
Emmanuel
This might solve the problem now: https://learn.microsoft.com/en-us/azure/templates/microsoft.web/2020-06-01/sites/functions/keys
I was able to create a function key by passing an ARM template resource that looked like this:
{
"type": "Microsoft.Web/sites/functions/keys",
"apiVersion": "2020-06-01",
"name": "{Site Name}/{Function App Name}/{Key Name}",
"properties": {
"value": "(optional)-key-value-can-be-passed-here"
}
}
It seems that even if you don't set 'value' in the properties, you still have to pass at least an empty properties object.
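Using the parameter names from the question, a parameterized version of that resource might look like the sketch below (the functionName parameter is an assumption, and the dependsOn is only needed if the site is created in the same template; the empty properties object lets the platform generate the key value):
{
"type": "Microsoft.Web/sites/functions/keys",
"apiVersion": "2020-06-01",
"name": "[concat(parameters('appServiceName'), '/', parameters('functionName'), '/PortalFunctionKey')]",
"dependsOn": [
"[resourceId('Microsoft.Web/sites', parameters('appServiceName'))]"
],
"properties": {}
}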
To solve this problem, you could refer to these GitHub issues (the first issue and the second issue); both discuss how to get the function key in an ARM template.
For now, the sample below shows how to get the key value:
"properties": {
"contentType": "text/plain",
"value": "[listkeys(concat(variables('functionAppId'), '/host/default/'),'2016-08-01').functionKeys.default]"
}
or the following to get the whole keys object:
"functionkeys": {
"type": "object",
"value": "[listkeys(concat(variables('functionAppId'), '/host/default'), '2018-11-01')]" }
}
You could give it a try; hope this helps.
So it is not possible to create a Function level Function key in ARM template at the moment.
I therefore created a feature request you can vote on if you are interested in it:
https://feedback.azure.com/forums/169385-web-apps/suggestions/39789043-create-function-level-keys-for-azure-functions-in
For now though, we are creating function-level function keys via a PowerShell deployment task step. Here is how.
Add an outputs section to your ARM template:
"outputs": {
"masterKey": {
"type": "string",
"value": "[listkeys(concat(resourceId(resourceGroup().name, 'Microsoft.Web/sites', parameters('appServiceName')), '/host/default'), '2018-11-01').masterKey]"
},
"appServiceName": {
"type": "string",
"value": "[parameters('appServiceName')]"
},
"functionKeys": {
"type": "array",
"value": [
{
"functionName": "myFunctionName",
"keys": [ "FunctionKeyName1", "FunctionKeyName2" ]
}
]
}
}
Add the following PS script file to your deploy project:
param (
[Parameter(Mandatory=$true)]
[string]
$armOutputString
)
Write-Host $armOutputString
$armOutputObj = $armOutputString | convertfrom-json
Write-Host $armOutputObj
$masterKey = $armOutputObj.masterKey.value
$appServiceName = $armOutputObj.appServiceName.value
$httpHeaders = @{
"x-functions-key" = $masterKey
}
$contentType = "application/json; charset=utf-8"
foreach($function in $armOutputObj.functionKeys.value){
$retryCount = 5;
while ($true) {
try {
$uriBase = "https://$($appServiceName).azurewebsites.net/admin/functions/$($function.functionName)/keys"
$existingKeys = Invoke-RestMethod -Method Get -Uri $uriBase -Headers $httpHeaders -ContentType $contentType
break;
}
catch {
if ($_.Exception.Response.StatusCode.value__ -eq 502) {
if ($retryCount-- -eq 0) {
throw;
}
else {
Write-Output ("Retry" + ": " + $_.Exception.Response.StatusCode + "; attempts=$retryCount")
[System.Threading.Thread]::Sleep(1000);
continue;
}
}
else {
throw;
}
}
}
foreach ($keyname in $function.keys){
$keyExists = 0
foreach($existingKey in $existingKeys.keys){
if($existingKey.name -eq $keyname){
$keyExists = 1
Write-Host "key $($keyname) already exists"
}
}
if($keyExists -eq 0){
$uri = "$($uriBase)/$($keyname)?code=$($masterKey)";
Write-Host "Adding $($keyname) key"
Invoke-RestMethod -Method Post -Uri "$($uriBase)/$($keyname)" -Headers $httpHeaders -ContentType $contentType;
}
}
}
The script ensures that the key will not be overwritten if it already exists, and will retry if it fails (the function MUST exist and be running for the API to work).
In your "Azure resource group deployment" task, under "Advanced" set "Deployment outputs" to "armDeployOutput".
Add a PowerShell task, set its script path to the PowerShell file in your deploy project, and set "Arguments" to "-armOutputString '$(armDeployOutput)'".
And that's it.
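If you want to try the script outside the pipeline, you can feed it a hand-built outputs JSON in the same shape (a rough sketch; the script file name Add-FunctionKeys.ps1 is just an example):
$sampleOutput = '{ "masterKey": { "value": "<master-key>" }, "appServiceName": { "value": "myfunctionapp" }, "functionKeys": { "value": [ { "functionName": "myFunctionName", "keys": [ "FunctionKeyName1" ] } ] } }'
.\Add-FunctionKeys.ps1 -armOutputString $sampleOutput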
Using the DevOps pushes endpoint, _apis/git/repositories/<Project>/pushes?api-version=6.0, we can rename or edit a file.
This works fine. However, I want to rename and edit a file in a single commit. I've tried passing two changes in a single request, like:
{
"changes": [
{
"changeType": "rename",
"item": {
"path": "/path/new-filename.txt"
},
"sourceServerItem": "/path/old-filename.txt"
},
{
"changeType": "edit",
"item": {
"path": "/path/new-filename.txt"
},
"newContent": {
"content": "...new content...",
"contentType": "rawtext"
}
}
]
}
This gave the error "Multiple operations were attempted on file 'path/new-filename.txt' within the same request. Only a single operation may be applied to each file within the same commit. Parameter name: newPush"
So I tried combining them with a change type of 'all':
{
"changeType": "all",
"item": {
"path": "/path/new-filename.txt",
},
"sourceServerItem": "/path/old-filename.txt",
"newContent": {
"content": "...new content...",
"contentType": "rawText"
}
}
Still no joy: "The parameters supplied are not valid. Parameter name: newPush"
Is this possible, or do I have to separate the changes in two commits?
Edit:
Can't even do this with multiple commits in one request. I mean, what's the point of having commits as an array when you must have exactly one commit anyway?
The parameters are incorrect. A posted push must contain exactly one commit and one refUpdate.
Parameter name: newPush
I tested with the REST API and can reproduce the same situation.
The issue comes from a limitation of the REST API itself: a push may contain only one commit, and multiple changes cannot be applied to the same file within that commit.
You need to separate the changes into two commits.
To meet your requirement, you can use a PowerShell script to call the REST API twice and commit the changes.
For example:
$token = "PAT"
$url="https://dev.azure.com/{Org}/{Project}/_apis/git/repositories/{Repo}/pushes?api-version=6.0"
$token = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(":$($token)"))
$JSON = @'
{
"refUpdates": [
{
"name": "refs/heads/master",
"oldObjectId": "commitid"
}
],
"commits": [
{
"comment": "Renaming tasks.md to xxx",
"changes": [
{
"changeType": "rename",
"sourceServerItem": "/path/old-filename.txt",
"item": {
"path": "/path/new-filename.txt"
}
}
]
}
]
}
'@
$response = Invoke-RestMethod -Uri $url -Headers @{Authorization = "Basic $token"} -Method Post -Body $JSON -ContentType application/json
$newobjectid = $response.commits.commitid
echo $newobjectid
$JSON1 = "
{
`"refUpdates`": [
{
`"name`": `"refs/heads/master`",
`"oldObjectId`": `"$newobjectid`"
}
],
`"commits`": [
{
`"comment`": `"Renaming tasks.md to xxx`",
`"changes`": [
{
`"changeType`": `"edit`",
`"item`": {
`"path`": `"/path/new-filename.txt`"
},
`"newContent`": {
`"content`": `"...new content...`",
`"contentType`": `"rawtext`"
}
}
]
}
]
}
"
$response1 = Invoke-RestMethod -Uri $url -Headers @{Authorization = "Basic $token"} -Method Post -Body $JSON1 -ContentType application/json
I'm new to Azure Functions, so it might be something obvious.
Here's what it is:
I have a Powershell Azure Functions HTTP Trigger Function with Pode, which has a GET and a POST route. Now when I send a POST request via Postman, Invoke-WebRequest or any other tool except Azure Test Tool, I end up in the GET route.
My debugging revealed that $TriggerMetadata contains '"Method": "GET"' in these cases. '"Method": "POST"' only when the request comes from Azure Test Tool itself.
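(For example, dumping the metadata like this — just a debugging sketch — shows the method that arrived:)
Write-Host ($TriggerMetadata | ConvertTo-Json -Depth 5)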
I am faced with a riddle. I hope someone can help me.
My Code:
param($Request, $TriggerMetadata)
$endpoint = '/api/Object'
Write-Host "$endpoint - PowerShell HTTP trigger function processed a request."
Start-PodeServer -Request $TriggerMetadata -ServerlessType AzureFunctions {
# get route that can return data
Add-PodeRoute -Method Get -Path $endpoint -ScriptBlock {
Write-Host "$endpoint - Get"
#doing stuff
}
# post route to create some data
Add-PodeRoute -Method Post -Path $endpoint -ScriptBlock {
Write-Host "$endpoint - Post"
#doing stuff
}
}
My function.json:
{
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "Request",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "Response"
}
]
}
I am very sorry, especially since this is very unsatisfactory, but the problem no longer exists.
I strongly suspect that it was a bug in Azure, since I did not change anything and everything is working again.
Time to check the Azure Functions SLA, I guess.
I have looked through the documentation available at -
https://learn.microsoft.com/en-us/rest/api/azure/devops/security/?view=azure-devops-rest-5.1
For a certain build definition, I want to be able to get all the security groups and users associated with it through code/powershell script and output it to a json file for example.
Any help will be appreciated! I have tried curling multiple APIs but am not getting what I need.
Thanks!
There is an API that is not documented. Try the request below:
POST https://dev.azure.com/{org name}/_apis/Contribution/HierarchyQuery/project/{project name}?api-version=5.0-preview.1
Request body:
{
"contributionIds": [
"ms.vss-admin-web.security-view-members-data-provider"
],
"dataProviderContext": {
"properties": {
"permissionSetId": "33344d9c-fc72-4d6f-aba5-fa317101a7e9",
"permissionSetToken": "{token}",
"sourcePage": {
"url": "https://dev.azure.com/{org name}/{project name}/_build?definitionId={build definition id}&_a=summary",
"routeId": "ms.vss-build-web.pipeline-details-route",
"routeValues": {
"project": "{project name}",
"viewname": "details",
"controller": "ContributedPage",
"action": "Execute",
"serviceHost": "{org name}"
}
}
}
}
}
Some key points you should pay attention to:
permissionSetId: here, 33344d9c-fc72-4d6f-aba5-fa317101a7e9 is a fixed value which represents the namespace ID of build security.
permissionSetToken: this is the token used to get the security info. You can run the command below to get the token(s) you should use:
az devops security permission list --id 33344d9c-fc72-4d6f-aba5-fa317101a7e9 --subject {your account} --output table --organization https://dev.azure.com/{org name} --project {project name}
url: this value tells the system which specific build you want to check. Just replace the corresponding org name/project name/definition id in the URL sample provided.
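(If you don't know the build definition id for that URL, one way to look it up — a sketch using the Build Definitions list API and the same Basic auth header as the script below — is:)
$defs = Invoke-RestMethod -Uri "https://dev.azure.com/{org name}/{project name}/_apis/build/definitions?name={definition name}&api-version=5.1" -Headers @{Authorization = "Basic $token"} -Method Get
$defs.value | Select-Object id, name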
In addition, I wrote a short PowerShell script for you:
$token = "{token}"
$url="https://dev.azure.com/{org name}/_apis/Contribution/HierarchyQuery/project/{project name}?api-version=5.0-preview.1"
$token = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(":$($token)"))
$context=#"
{
"contributionIds": [
"ms.vss-admin-web.security-view-members-data-provider"
],
"dataProviderContext": {
"properties": {
"permissionSetId": "33344d9c-fc72-4d6f-aba5-fa317101a7e9",
"permissionSetToken": "{token}",
"sourcePage": {
"url": "https://dev.azure.com/{org name}/{project name}/_build?definitionId={build definition id}&_a=summary",
"routeId": "ms.vss-build-web.pipeline-details-route",
"routeValues": {
"project": "{project name}",
"viewname": "details",
"controller": "ContributedPage",
"action": "Execute",
"serviceHost": "{org name}"
}
}
}
}
}
"#
$response = Invoke-RestMethod -Uri $url -Headers @{Authorization = "Basic $token"} -Method Post -Body $context -ContentType "application/json"
Write-Host "results = $($response.dataProviders.'ms.vss-admin-web.security-view-members-data-provider'.identities.displayname| ConvertTo-Json -Depth 100)"
I am in the middle of creating a function to schedule node pool scaling on Azure.
This was fairly easy using the AKS module and creating a function to scale the node pool, but now the development team has started using multiple node pools in the same Kubernetes service. Usually I would just use Set-AzAks as follows:
Set-AzAks -Name <name> -ResourceGroupName <rgname> -NodeCount 1
But I seem unable to specify individual node pools in the command. I have been able to use the az CLI tool to get the functionality I want when doing it manually, but I really want to use the Azure Automation account to do this.
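(For reference, the CLI command I mean is roughly the following, which does let you target a specific node pool:)
az aks nodepool scale --resource-group <rgname> --cluster-name <name> --name <nodepoolname> --node-count 1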
Any help would be appreciated
This is a known issue on GitHub, so I don't think the PowerShell command Set-AzAks is a good way to scale the AKS node count in the current situation.
Instead, I recommend you use the Azure REST API Managed Clusters - Create Or Update through PowerShell; it works just as well as the Azure CLI commands.
Update:
As requested, here is an example:
$body = '{
"location": "eastus",
"properties": {
"kubernetesVersion": "1.14.6",
"dnsPrefix": "xxxxx",
"agentPoolProfiles": [
{
"count": 2,
"vmSize": "Standard_DS2_v2",
"osDiskSizeGB": 100,
"vnetSubnetID": "xxxxxxxx",
"maxPods": 30,
"osType": "Linux",
"type": "AvailabilitySet",
"orchestratorVersion": "1.14.6",
"name": "agentpool"
}
],
"addonProfiles": {
"httpapplicationrouting": {
"enabled": false,
"config": {}
},
"omsagent": {
"enabled": true,
"config": {
"loganalyticsworkspaceresourceid": "xxxxxxxx"
}
}
},
"nodeResourceGroup": "xxxxxxxxx",
"enableRBAC": true,
"networkProfile": {
"networkPlugin": "azure",
"serviceCidr": "10.1.0.0/16",
"dnsServiceIP": "10.1.0.10",
"dockerBridgeCidr": "172.17.0.1/16",
"loadBalancerSku": "Basic"
}
}
}'
$requestUri = "https://management.azure.com/subscriptions/{subscription_id}/resourceGroups/{your_group_name}/providers/Microsoft.ContainerService/managedClusters/{your_cluster_name}?api-version=2019-08-01"
$accessToken = "xxxxxxx"
Invoke-RestMethod -Headers @{Authorization = "Bearer $accessToken"} -Uri $requestUri -Method PUT -ContentType 'application/json' -Body $body
You can change the content of the body as you need; the properties are described in the REST API documentation.
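One thing assumed above is that you already have a bearer token in $accessToken. From an Automation account, a sketch for obtaining it, assuming the Az.Accounts module and a managed identity are available:
# assumes the Automation account has a managed identity with access to the cluster
Connect-AzAccount -Identity
$accessToken = (Get-AzAccessToken -ResourceUrl "https://management.azure.com/").Token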
Following on from this question, I need to go a step further and be able to create multiple data disks of different sizes, where quantity and sizes are specified at deploy-time.
My latest incarnation is creating the (managed) disks in their own resource outside the VM resource and then trying to attach them.
It seems that copyIndex resets for each resource, so I believe I need to create them all in one copy loop so the "attach" part in the VM resource can use the length function. However, I cannot think of a way to change a property inside the copy loop when a certain iteration is reached (and I would understand why this isn't possible).
I'm thinking I need to use something like:
"count": "[variables('numberOfDisks')[parameters('DiskSize')]]"
But unsure how to proceed.
I've also thought about nested templates, but again, this falls foul of not being able to change parameters inside a loop.
In programming, I could create a 2D array or dictionary object, but I cannot find a way to do this in ARM templates, although I have just found the intersection() function.
Datadisk configuration examples:
It is only the size and quantity that vary per deployment. All other properties are anticipated to be the same for all disks for any given VM.
VM 1: 2x @ 256GB, 4x @ 512GB, 4x @ 1023GB
VM 2: 1x @ 1023GB, 1x @ 80GB
VM 3: 1x @ 1023GB, 1x @ 80GB, 2x @ 256GB, 2x @ 512GB
My template only deploys one VM, but the number and size of disks are unknown. The idea is that DSC will come along and create volumes, collating the disks based on their size.
I'm not going to paste my workings as they are long, wrong and bulk out this post. Hopefully the above is enough to prove I have been trying to work things out for myself.
So I've managed to achieve it. Possibly not the most elegant, but it works. Although Microsoft seem to suggest using a top-level resource to create the data disks, I can't see how that would work, as I don't know of a way to use copy[] inside dependsOn[], which is required if you are creating the disks and the VM in the same template; otherwise they'll try to deploy simultaneously.
For those that may be interested, here is my solution:
Firstly, I am triggering the template using PowerShell's New-AzureRmResourceGroupDeployment. I'm not using a parameters file; the parameters are generated in PS.
$RG = "ResourceGroup where VM resides"
$Disks = @(
@{name = "datadisk-001";diskSizeGB = "256";lun = 0}
@{name = "datadisk-002";diskSizeGB = "256";lun = 1}
@{name = "datadisk-003";diskSizeGB = "512";lun = 2}
@{name = "datadisk-004";diskSizeGB = "512";lun = 3}
@{name = "datadisk-005";diskSizeGB = "512";lun = 4}
@{name = "datadisk-006";diskSizeGB = "512";lun = 5}
)
$params = @{
diskConfig = $disks
storageAccounttype = "Standard_LRS"
vmName = "AUCADN102007006"
}
New-AzureRmResourceGroupDeployment -Name "SomeDeploymentName" `
-ResourceGroupName $RG `
-Mode Incremental `
-DeploymentDebugLogLevel All `
-TemplateFile "C:\Temp\DiskTest.json" `
-Verbose `
@params
The template itself is seriously cut down, and doesn't actually create the VM. The referenced VM needs to exist. I've taken out as much as possible.
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"vmName": {
"type": "string"
},
"diskConfig": {
"type": "array"
},
"storageAccountType": {
"type": "string",
"defaultValue": "Standard_LRS",
"allowedValues": [
"Standard_LRS",
"Premium_LRS"
],
"metadata": {
"description": "Type of disk"
}
}
},
"variables": {
"vmSize": "Standard_DS4_v2",
"sharedVariables": {
"storageAccountType": "[parameters('storageAccountType')]"
}
},
"resources": [
{
"apiVersion": "2017-03-30",
"type": "Microsoft.Compute/virtualMachines",
"name": "[parameters('vmName')]",
"location": "[resourceGroup().location]",
"dependsOn": [
],
"properties": {
"storageProfile": {
"copy": [
{
"name": "dataDisks",
"count": "[length(parameters('diskConfig'))]",
"input": {
"name": "[concat(parameters('vmName'),'-',parameters('diskConfig')[CopyIndex('dataDisks')].name)]",
"diskSizeGB": "[parameters('diskConfig')[CopyIndex('dataDisks')].diskSizeGB]",
"lun": "[parameters('diskConfig')[copyIndex('dataDisks')].lun]",
"createOption": "Empty",
"managedDisk": {
"storageAccountType": "[variables('sharedVariables').storageAccountType]"
}
}
}
]
}
}
}
],
"outputs": {
"arrayOutput1": {
"type": "array",
"value": "[parameters('diskConfig')]"
},
"arrayCount": {
"type": "int",
"value": "[length(parameters('diskConfig'))]"
}
}
}
Thanks to this post where the author demonstrates the use of indexing:
"properties": {
"accountType": " [parameters('storageAccountList')[copyIndex()].storageAccountType]"
}
Note how copyIndex() is in [ ]
ToDo: Do something better with $Disks, either using PS to create the hashtable array or creating it inside the template.
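For the first option, a rough sketch of building $Disks from a size/quantity spec in PS (names and LUNs assigned sequentially; purely illustrative):
# hypothetical spec: 2 disks of 256 GB and 4 of 512 GB
$spec = @(
    @{ diskSizeGB = 256; quantity = 2 }
    @{ diskSizeGB = 512; quantity = 4 }
)
$Disks = @()
$lun = 0
foreach ($entry in $spec) {
    for ($i = 0; $i -lt $entry.quantity; $i++) {
        # build each hashtable in the shape the template's diskConfig parameter expects
        $Disks += @{
            name       = "datadisk-{0:d3}" -f ($lun + 1)
            diskSizeGB = "$($entry.diskSizeGB)"
            lun        = $lun
        }
        $lun++
    }
}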
HTH