Azure metric alert dimension operator ARM-Template

I want to exclude a Virtual Machine from an alert rule. I use ARM templates to deploy my alerts. The problem is that Exclude won't work.
"dimensions": [
{
"name": "Computer",
"operator": "Exclude",
"values": [ "VMname" ]
},
ISSUE: If I choose Exclude as the operator, it performs the same as Include.
Does anyone have the same issue?
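For context, a minimal sketch of where such a dimensions block typically sits inside a Microsoft.Insights/metricAlerts criteria section (the metric name, threshold, aggregation, and odata.type below are placeholders for illustration, not taken from the actual template):
"criteria": {
    "odata.type": "Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria",
    "allOf": [
        {
            "name": "criterion1",
            "metricName": "Heartbeat",
            "dimensions": [
                {
                    "name": "Computer",
                    "operator": "Exclude",
                    "values": [ "VMname" ]
                }
            ],
            "operator": "LessThan",
            "threshold": 1,
            "timeAggregation": "Total"
        }
    ]
}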

A related question was raised in this MSDN thread; sharing it here for the benefit of a broader audience who might face a similar issue.

Azure Marketplace image not found

I'm deploying Linux images using Terraform. To do that, we need to get the image purchase plan info from Azure Marketplace images (some require terms to be accepted and some don't). Microsoft has an instructional doc on how to do this: https://learn.microsoft.com/en-us/azure/virtual-machines/windows/cli-ps-findimage. Great. The problem is that it does not work on some of the images our teams want to deploy and I can't see why, so I'm stuck not knowing how to deploy the images they've asked for.
Here's an example where finding the purchase plan info of a Checkpoint image works and I can accept the marketplace terms successfully. Notice the "plan" block information from the first command and then the terms showing as "accepted" in the second command:
scott@Azure:~$ az vm image show --urn checkpoint:check-point-cg-r81:mgmt-byol:latest
{
  "automaticOsUpgradeProperties": {
    "automaticOsUpgradeSupported": false
  },
  "dataDiskImages": [],
  "disallowed": {
    "vmDiskType": "None"
  },
  "extendedLocation": null,
  "features": null,
  "hyperVGeneration": "V1",
  "id": "/Subscriptions/5ff78d61-5262-4bd6-81fa-42d8723b8e3e/Providers/Microsoft.Compute/Locations/westus/Publishers/checkpoint/ArtifactTypes/VMImage/Offers/check-point-cg-r81/Skus/mgmt-byol/Versions/8100.900392.0710",
  "location": "westus",
  "name": "8100.900392.0710",
  "osDiskImage": {
    "operatingSystem": "Linux",
    "sizeInBytes": 107374182912,
    "sizeInGb": 100
  },
  "plan": {
    "name": "mgmt-byol",
    "product": "check-point-cg-r81",
    "publisher": "checkpoint"
  },
  "tags": null
}
scott@Azure:~$ az vm image terms show --urn checkpoint:check-point-cg-r81:mgmt-byol:latest
{
  "accepted": true,
  "id": "/subscriptions/5ff78d61-5262-4bd6-81fa-42d8723b8e3e/providers/Microsoft.MarketplaceOrdering/offerTypes/VirtualMachine/publishers/checkpoint/offers/check-point-cg-r81/plans/mgmt-byol/agreements/current",
  "licenseTextLink": "https://mpcprodsa.blob.core.windows.net/legalterms/3E5ED_legalterms_CHECKPOINT%253a24CHECK%253a2DPOINT%253a2DCG%253a2DR81%253a24MGMT%253a2DBYOL%253a24U2R6YKHF2KWHXN7Y4Q4Q4OEKEYL6JZJCCZGIIGQBSB7FNDUBYTDIRQY6QPT5XMT7NGAH5XWH3LHSQY22URTFS3X7HZHQXZ3CIVJKC2Y.txt",
  "marketplaceTermsLink": "https://mpcprodsa.blob.core.windows.net/marketplaceterms/3EDEF_marketplaceterms_VIRTUALMACHINE%253a24AAK2OAIZEAWW5H4MSP5KSTVB6NDKKRTUBAU23BRFTWN4YC2MQLJUB5ZEYUOUJBVF3YK34CIVPZL2HWYASPGDUY5O2FWEGRBYOXWZE5Y.txt",
  "name": "mgmt-byol",
  "plan": "mgmt-byol",
  "privacyPolicyLink": "http://www.checkpoint.com/privacy",
  "product": "check-point-cg-r81",
  "publisher": "checkpoint",
  "retrieveDatetime": "2021-07-21T13:48:54.3464069Z",
  "signature": "R65W6K5QQIRJP7DUOIK26PND236FGY6YIVTOOJ3ZFZC2CRQGPNF5TA5BNANFJWTFRKFZULYKINVSJ2BIB2DDNRW5AMUS2N5KQR7YTBQ",
  "systemData": {
    "createdAt": "2021-07-21T13:48:54.417391+00:00",
    "createdBy": "5ff78d61-5262-4bd6-81fa-42d8723b8e3e",
    "createdByType": "ManagedIdentity",
    "lastModifiedAt": "2021-07-21T13:48:54.417391+00:00",
    "lastModifiedBy": "5ff78d61-5262-4bd6-81fa-42d8723b8e3e",
    "lastModifiedByType": "ManagedIdentity"
  },
  "type": "Microsoft.MarketplaceOrdering/offertypes"
}
Now, using the exact same method, the one Microsoft prescribes in their own documentation, I can get an RHEL image, but then when I try to accept the terms it errors out that the image is not found. For all intents and purposes, the output of the first command has no appreciable difference from the Checkpoint image that worked as expected. Notice also I included the location information just to ensure the image was available in the intended region.
scott@Azure:~$ az vm image show -l westeurope --urn redhat:rhel-byos:rhel-lvm83:latest
{
  "automaticOsUpgradeProperties": {
    "automaticOsUpgradeSupported": false
  },
  "dataDiskImages": [],
  "disallowed": {
    "vmDiskType": "None"
  },
  "extendedLocation": null,
  "features": [
    {
      "name": "IsAcceleratedNetworkSupported",
      "value": "True"
    }
  ],
  "hyperVGeneration": "V1",
  "id": "/Subscriptions/5ff78d61-5262-4bd6-81fa-42d8723b8e3e/Providers/Microsoft.Compute/Locations/westeurope/Publishers/redhat/ArtifactTypes/VMImage/Offers/rhel-byos/Skus/rhel-lvm83/Versions/8.3.20210409",
  "location": "westeurope",
  "name": "8.3.20210409",
  "osDiskImage": {
    "operatingSystem": "Linux",
    "sizeInBytes": 68719477248,
    "sizeInGb": 64
  },
  "plan": {
    "name": "rhel-lvm83",
    "product": "rhel-byos",
    "publisher": "redhat"
  },
  "tags": null
}
scott@Azure:~$ az vm image terms show --urn redhat:rhel-byos:rhel-lvm83:latest
(BadRequest) Offer with PublisherId: 'redhat' and OfferId: 'rhel-byos' not found. Consider the following solutions: 1-Check to see if offer details are correct 2- If this offer is created recently, please allow up to 30 minutes for thisoffer to be available for purchase 3- If the offer is removed from the marketplace for new purchase. See similar offers here 'https://azuremarketplace.microsoft.com/en-us/marketplace/apps?page=1%26search=redhat%20rhel-byos'. CorrelationId '75335d2a-fc28-4e4c-acd3-ec2ea423f212'.
Clearly, it is the correct information. Yet, Azure can't find the image it literally just gave me the info for. What am I missing here? I'm not looking for workarounds or "use a different image" answers. I'm looking to understand what's going on to better deal with it head on, or deliver bad news with backed up data if need be. Cheers!
I have tested the commands you ran for both images in my subscription. I am able to see the Checkpoint image and its terms as well, but not the RHEL-BYOS image terms.
The Checkpoint offer is public with a pay-as-you-go Azure subscription; that's why it shows the terms for purchase. But the RHEL-BYOS offer [the bring-your-own-subscription (BYOS), Red Hat Gold Image model] is private.
Requirements to use RHEL BYOS images:
You must have access to the Red Hat Cloud Access program. Enable your Red Hat subscriptions for Cloud Access at Red Hat Subscription-Manager. You need to have on hand the Azure subscriptions that are going to be registered for Cloud Access.
If the Red Hat subscriptions you enabled for Cloud Access meet the eligibility requirements, your Azure subscriptions are automatically enabled for Gold Image access.
After you finish the Cloud Access enablement steps, Red Hat validates your eligibility for the Red Hat Gold Images. If validation is successful, you receive access to the Gold Images within three hours.
Reference:
Red Hat Enterprise Linux bring-your-own-subscription Azure images - Azure Virtual Machines
Red Hat Enterprise Linux Bring-Your-Own-Subscription Gold Images now Generally Available in Azure | Azure updates | Microsoft Azure

Cosmosdb Trigger for Azure Functions Application

We are developing applications using Azure Functions (Python). I have 2 questions regarding Azure Functions:
1. Can we monitor 2 collections using a single Cosmos DB trigger?
--- I have looked through the documentation and it seems this isn't supported. Did I miss anything?
2. If there are 2 functions monitoring the same collection, will only one of the functions be triggered?
--- I observed this behaviour today. I was running 2 instances of the functions app and the data from the Cosmos DB trigger was sent to only one of them. I am trying to find out the reason for it.
EDIT:
1 - I had never used multiple input triggers, but according to the official wiki it's possible; just add a function.json file:
{
  "bindings": [
    {
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "image-resize"
    },
    {
      "type": "blob",
      "name": "original",
      "direction": "in",
      "path": "images-original/{name}"
    },
    {
      "type": "blob",
      "name": "resized",
      "direction": "out",
      "path": "images-resized/{name}"
    }
  ]
}
PS: I know you're using Cosmos DB; the sample above is just to illustrate.
2 - I assume it's due to the way it's implemented (e.g. topic vs. queue), so the first function locks the event/message and the second one is not aware of it. At the moment, Durable Functions for Python is still under development and should be released next month (03/2020). It will allow you to chain the execution of functions, just like what's already available for C#/Node:
https://learn.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-overview?tabs=csharp#chaining
What you can do is output to a queue, which will trigger your second function after the first function completes (pretty much what Durable Functions offers).
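A minimal sketch of what that chaining could look like in function.json, with a Cosmos DB trigger feeding a queue output binding (the database, collection, connection, and queue names below are made up for illustration):
{
  "bindings": [
    {
      "type": "cosmosDBTrigger",
      "name": "documents",
      "direction": "in",
      "connectionStringSetting": "CosmosDBConnection",
      "databaseName": "mydb",
      "collectionName": "orders",
      "leaseCollectionName": "leases",
      "createLeaseCollectionIfNotExists": true
    },
    {
      "type": "queue",
      "name": "outputQueueItem",
      "direction": "out",
      "queueName": "orders-to-process",
      "connection": "AzureWebJobsStorage"
    }
  ]
}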

Azure User/Group provisioning with SCIM problem with boolean values

I have written an application compliant with the SCIM standard (https://www.rfc-editor.org/rfc/rfc7644), but when integrating with Azure I can see that it fails to update a user when the user is disabled. The request that Azure sends is the following:
PATCH /Users/:id
{
  "schemas": [
    "urn:ietf:params:scim:api:messages:2.0:PatchOp"
  ],
  "Operations": [
    {
      "op": "Replace",
      "path": "active",
      "value": "False"
    }
  ]
}
The SCIM protocol "sais" that the attribute active accept boolean values (https://www.rfc-editor.org/rfc/rfc7643#section-4.1.1), so following the PATCH protocol (https://www.rfc-editor.org/rfc/rfc6902#section-4.3) I expect a boolean value not a string with a boolean written inside it, so the expected request is the following:
PATCH /Users/:id
{
  "schemas": [
    "urn:ietf:params:scim:api:messages:2.0:PatchOp"
  ],
  "Operations": [
    {
      "op": "Replace",
      "path": "active",
      "value": false
    }
  ]
}
So the problem is that the given value "False" should be false.
Is this a bug in Azure or am I missing something? If it is a bug, should I try to parse the string and extract a boolean from it? But if I do that, I'm deviating from the standard. How did you handle this problem?
I also spent a lot of time trying to figure out if Azure was being compliant with the SCIM spec and the answer is that they are not.
The default values that they send for PATCH requests are indeed strings, not booleans as the User JSON schema defines.
You can override the values that get sent/mapped into the SCIM schema as follows:
1. Go into your provisioning app.
2. Open Mappings > Synchronize Azure Active Directory Users to customappsso (the name here might be different in your directory).
3. Find Switch([IsSoftDeleted], "False", "True", "True", "False").
4. Replace it with Switch([IsSoftDeleted], , false, true, true, false) (note the additional comma).
5. Hit OK and SAVE.
NOTE that after saving you will still see quotes around the booleans, but the PATCH request will be sent correctly.
The default Azure implementation of SCIM isn't fully compliant with the required SCIM schema.
I found I was able to keep using the default NOT([IsSoftDeleted]) by using Microsoft's workaround, which does aim to be SCIM compliant for PATCH operations (it returns booleans rather than strings for the 'active' attribute).
This is achieved by appending the URL parameter ?aadOptscim062020 after the tenant URL input.

Azure Search, listAdminKeys, ARM output error (does not support http method 'POST')

I am using this bit of code as an output object in my ARM template,
"[listAdminKeys(variables('searchServiceId'), '2015-08-19').PrimaryKey]"
Full text sample of the output section:
"outputs": {
"SearchServiceAdminKey": {
"type": "string",
"value": "[listAdminKeys(variables('searchServiceId'), '2015-08-19').PrimaryKey]"
},
"SearchServiceQueryKey": {
"type": "string",
"value": "[listQueryKeys(variables('searchServiceId'), '2015-08-19')[0]]"
}
I receive the following error during deployment (unfortunately, any error means the template deployment skips the output section):
"The requested resource does not support http method 'POST'."
Checking the browser behavior seems to confirm the error is related to the function (and that it uses POST).
listAdminKeys using POST
How might I avoid this error and retrieve the AzureSearch admin key in the output?
Update: the goal of doing this is to gather all the relevant bits of information to plug into other scripts (.ps1) as parameters, since those resources are provisioned by this template. Would save someone from digging through the portal to copy/paste.
Thank you
Your error comes from listQueryKeys, not the admin keys.
https://learn.microsoft.com/en-us/rest/api/searchmanagement/adminkeys/get
https://learn.microsoft.com/en-us/rest/api/searchmanagement/querykeys/listbysearchservice
You won't be able to retrieve those in the ARM template; it can only "emulate" POST calls, not GET.
With the latest API version, it's possible to get the query key using this:
"SearchServiceQueryKey": {
"type": "string",
"value": "[listQueryKeys(variables('searchServiceId'), '2020-06-30').value[0].key]"
}
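If the admin key behaves the same way with the newer API version (an assumption; the thread only confirms the query key), the corresponding output would presumably look like this:
"SearchServiceAdminKey": {
  "type": "string",
  "value": "[listAdminKeys(variables('searchServiceId'), '2020-06-30').primaryKey]"
}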

Is it possible to point a Google Cloud DNS entry to a Google Compute Engine instance?

Could I use a reference or link to my Google Compute Engine host's logical name, instead of 173.255.x.x. in:
{
  "additions": [
    {
      "kind": "dns#resourceRecordSet",
      "name": "compute-engine-host.domain.com.",
      "rrdatas": [
        "173.255.x.x."
      ],
      "ttl": 60,
      "type": "A"
    }
  ]
}
I don't believe that this is possible yet, but I'll see if we can get it on the list of things to build.
Until then, you should be able to use the Cloud DNS API's Changes.create method to have your GCE machine add/update a DNS record when it boots up...
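For reference, a minimal sketch of the request body such a boot-time update could send to the Cloud DNS changes.create endpoint (the hostname, IPs, and TTL here are placeholders; the existing record has to be listed under deletions so the addition can replace it):
{
  "deletions": [
    {
      "kind": "dns#resourceRecordSet",
      "name": "compute-engine-host.domain.com.",
      "type": "A",
      "ttl": 60,
      "rrdatas": [
        "173.255.0.1"
      ]
    }
  ],
  "additions": [
    {
      "kind": "dns#resourceRecordSet",
      "name": "compute-engine-host.domain.com.",
      "type": "A",
      "ttl": 60,
      "rrdatas": [
        "203.0.113.10"
      ]
    }
  ]
}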
