Set-AzVMDiagnosticsExtension doesn't work as expected across subscriptions - azure

What I'm trying to do is enable the VM Diagnostics extension to send event logs (Application [1,2,3], Security [all], System [1,2,3]) to one unified storage account (let's call it logs storage), where the WADWindowsEventLogsTable table is supposed to be created.
The different scenarios I'm trying to implement:
VM in the same resource group as the logs storage account.
The result: works.
VM in a different resource group from the logs storage account.
The result: works.
VM in a different subscription.
The result: the extension gets enabled. However, when I go to the Agent tab, I get the error message "The value must not be empty" under the Storage account section.
(Screenshot: Agent tab, Storage account section error)
Environment
Windows
Powershell 7.0.2
DiagnosticsConfiguration.json
{
  "PublicConfig": {
    "WadCfg": {
      "DiagnosticMonitorConfiguration": {
        "overallQuotaInMB": 5120,
        "WindowsEventLog": {
          "scheduledTransferPeriod": "PT1M",
          "DataSource": [
            {
              "name": "Application!*[System[(Level=1 or Level=2 or Level=3 or Level=4)]]"
            },
            {
              "name": "Security!*"
            },
            {
              "name": "System!*[System[(Level=1 or Level=2 or Level=3 or Level=4)]]"
            }
          ]
        }
      }
    },
    "StorageAccount": "logsstorage",
    "StorageType": "TableAndBlob"
  },
  "PrivateConfig": {
    "storageAccountName": "logsstorage",
    "storageAccountKey": "xxxxxxx",
    "storageAccountEndPoint": "https://logsstorage.blob.core.windows.net"
  }
}
PowerShell commands:
Set-AzVMDiagnosticsExtension -ResourceGroupName "myvmresourcegroup" -VMName "myvm" -DiagnosticsConfigurationPath "DiagnosticsConfiguration.json"
I even tried explicitly specifying the account name and key:
$storage_key = "xxxxxx"
Set-AzVMDiagnosticsExtension -ResourceGroupName "myvmresourcegroup" -VMName "myvm" -DiagnosticsConfigurationPath "DiagnosticsConfiguration.json" -StorageAccountName "logsstorage" -StorageAccountKey $storage_key
I've spent a lot of time trying to figure out the issue without luck.
The real issue here is that the extension doesn't create the expected WADWindowsEventLogsTable table (or write to it if it already exists).
According to the official documentation (example 3), I should be able to do this:
https://learn.microsoft.com/en-us/powershell/module/az.compute/set-azvmdiagnosticsextension?view=azps-4.3.0
I've submitted an issue with the team on GitHub with more details, but I'm still waiting for their input:
https://github.com/Azure/azure-powershell/issues/12259

This is because the storage account "logsstorage" you specify is in another subscription.
The extension expects a storage account in the subscription you have currently selected, so you need to modify your DiagnosticsConfiguration.json file and specify a storage account that is in the current subscription.

I managed to get this fixed with some help from a Microsoft engineer.
I've detailed the answer in this GitHub issue:
Set-AzVMDiagnosticsExtension doesn't seem to work properly across subscriptions
The answer:
I managed to get this working, thanks to help from @prernavashistha from Microsoft support. It turned out there's some inconsistency in the documentation.
According to the documentation here :
https://learn.microsoft.com/en-us/azure/azure-monitor/platform/diagnostics-extension-windows-install#powershell-deployment
In PrivateConfig, I should pass the storage account URI to the "storageAccountEndPoint" key:
"PrivateConfig": {
"storageAccountEndPoint": "https://logsstorage.blob.core.windows.net"}
However, according to another documentation reference :
https://learn.microsoft.com/en-us/azure/azure-monitor/platform/diagnostics-extension-schema-windows#json
I should pass the Azure storage endpoint :
"PrivateConfig": {
"storageAccountEndPoint": "https://core.windows.net"}
I can confirm that using the Azure storage endpoint resolved the issue: I can enable the extension across subscriptions, and I can see logs being written to the correct table as expected.
Thanks
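For reference, a minimal sketch of the full cross-subscription flow described above (the subscription IDs and the logs resource group name are placeholders, not values from the original question):

# Get the key of the central logs storage account from the subscription that owns it
Set-AzContext -SubscriptionId "<logs-subscription-id>"
$storage_key = (Get-AzStorageAccountKey -ResourceGroupName "<logs-resource-group>" -Name "logsstorage")[0].Value

# Switch to the subscription that contains the VM and enable the extension;
# DiagnosticsConfiguration.json uses "storageAccountEndPoint": "https://core.windows.net"
Set-AzContext -SubscriptionId "<vm-subscription-id>"
Set-AzVMDiagnosticsExtension -ResourceGroupName "myvmresourcegroup" -VMName "myvm" `
    -DiagnosticsConfigurationPath "DiagnosticsConfiguration.json" `
    -StorageAccountName "logsstorage" -StorageAccountKey $storage_key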

Related

How to get Time zone based on region in azure using azure .Net SDK/Fluent API/ REST API?

I am setting up auto-shutdown for a VM using an ARM template with .NET Core. Now I want to show a dropdown for the time zone, as the Azure portal does, based on the region the VM is deployed in.
I am attaching a screenshot.
The REST API endpoint
https://management.azure.com/subscriptions/{yourSubscriptionId}/resourceGroups/{yourResourceGroup}/providers/Microsoft.Compute/virtualMachines/{yourVMName}?$expand=instanceView&api-version=2020-12-01
should return something including:
"osProfile": {
"computerName": "computername",
"adminUsername": "adminusername",
"windowsConfiguration": {
"provisionVMAgent": true,
"enableAutomaticUpdates": true,
"patchSettings": {
"patchMode": "AutomaticByOS"
}
},
Now, you don't see timeZone there, even though the REST API documentation says it should be under windowsConfiguration. I'm speculating that it is missing because I haven't changed it, and that it is defaulting.
You can also run
$vm = Get-AzVM -Name {vmName}
$vm.OSProfile.WindowsConfiguration
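Building on that, a minimal sketch (the resource group and VM names are placeholders) that reads the property and falls back to a default when nothing was set explicitly:

$vm = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM"
$timeZone = $vm.OSProfile.WindowsConfiguration.TimeZone

if ([string]::IsNullOrEmpty($timeZone)) {
    # Nothing was set explicitly, so the VM is using the image default (typically UTC)
    $timeZone = "UTC"
}
Write-Output "VM time zone: $timeZone"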

ARM template error Bad JSON content found in the request

I am trying to deploy an ARM template using the Azure DevOps release pipeline. Azure Key Vault is one of the resources in the template. The deployment is successful when I use a PowerShell script; however, when the Azure DevOps release pipeline is used, the deployment fails with the error "Bad JSON content found in the request".
The key vault resource definition is as below.
{
  "type": "Microsoft.KeyVault/vaults",
  "apiVersion": "2018-02-14",
  "name": "[parameters('keyVaultName')]",
  "location": "[parameters('location')]",
  "tags": {
    "displayName": "KeyVault"
  },
  "properties": {
    "enabledForDeployment": "[parameters('enabledForDeployment')]",
    "enabledForTemplateDeployment": "[parameters('enabledForTemplateDeployment')]",
    "enabledForDiskEncryption": "[parameters('enabledForDiskEncryption')]",
    "tenantId": "[parameters('tenantId')]",
    "accessPolicies": [],
    "sku": {
      "name": "[parameters('skuName')]",
      "family": "A"
    }
  }
}
Update: I suspected that it could be because of the tenant ID and hardcoded it to test, but still no luck.
According to the log, you are specifying override parameters in the task. That's why you are still facing the "Bad request" error even though you are using the ARM template I provided. In the task logic, the ARM template files become the request body of the API call that creates the resources you specified in Azure. For a detailed description of the task logic, you can refer to my previous answer.
The parameter definition in the ARM template is correct; the error is now caused by the override parameters you specified.
More specifically, the error is because of the subscription().tenantId in your parameter override definition.
You can try running Write-Host subscription().tenantId in an Azure PowerShell task to print its value. You will see that it does not return anything. In short, this expression can only be used in the JSON file, not in the task.
So, because no value comes out of this expression, and you have overridden the value defined in the JSON file, the request body lacks the key parameter value (tenantId) when the task calls the API to create the Azure resource.
There are two solutions:
1. Do not try to override parameters whose values come from such expressions.
Here I mean the parameters related to the Azure subscription; most of these expressions cannot be evaluated in the Azure ARM deploy task. (See the template sketch after this answer for letting the template resolve tenantId itself.)
2. If you still want to override these special parameters with such expressions in the task:
In that case you must first add a task to retrieve the tenantId, then pass it into the ARM deploy task.
You can add an Azure PowerShell task with the following sample script:
# $azureSubscriptionId is expected to be supplied as a pipeline variable
Write-Output "Getting tenantId using Get-AzureRmSubscription..."
$subscription = (Get-AzureRmSubscription -SubscriptionId $azureSubscriptionId)
Write-Output "Requested subscription: $azureSubscriptionId"
$subscriptionId = $subscription.Id
$subscriptionName = $subscription.Name
$tenantId = $subscription.TenantId
Write-Output "Subscription Id: $subscriptionId"
Write-Output "Subscription Name: $subscriptionName"
Write-Output "Tenant Id: $tenantId"
# Publish the value as a pipeline variable so later tasks can read it as $(TenantID)
Write-Host "##vso[task.setvariable variable=TenantID;]$tenantId"
Then in the next task, you can use $(TenantID) to get its value.
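For example, assuming the template parameter is named tenantId (as in the Key Vault snippet above), the override parameters field of the ARM deployment task can then reference the pipeline variable like this (a sketch of the field value, not taken from the original question):

-tenantId "$(TenantID)"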
Here you can refer to these two excellent blogs: Blog1 and Blog2.
I still recommend the first solution, since the pipeline will grow in size and complexity if you choose the second one.
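Since the first solution is the recommended one, here is a minimal sketch of what it can look like in the template itself (the parameter name tenantId is assumed to match the Key Vault resource above); ARM resolves subscription().tenantId at deployment time, so no override is needed in the task:

"parameters": {
  "tenantId": {
    "type": "string",
    "defaultValue": "[subscription().tenantId]",
    "metadata": {
      "description": "Resolved by ARM at deployment time; do not override it in the task."
    }
  }
}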

Serviceprincipal template for Azure Datalake analytics in datafactory

On this page:
https://learn.microsoft.com/en-us/azure/data-factory/v1/data-factory-usql-activity
there is a template for using Azure Data Lake Analytics in Azure Data Factory with a service principal (instead of authorizing manually for each use).
The template looks like this:
{
  "name": "AzureDataLakeAnalyticsLinkedService",
  "properties": {
    "type": "AzureDataLakeAnalytics",
    "typeProperties": {
      "accountName": "adftestaccount",
      "dataLakeAnalyticsUri": "azuredatalakeanalytics.net",
      "servicePrincipalId": "<service principal id>",
      "servicePrincipalKey": "<service principal key>",
      "tenant": "<tenant info, e.g. microsoft.onmicrosoft.com>",
      "subscriptionId": "<optional, subscription id of ADLA>",
      "resourceGroupName": "<optional, resource group name of ADLA>"
    }
  }
}
This template does not work in Azure Data Factory: it insists that for the type "AzureDataLakeAnalytics" it is not possible to have "servicePrincipalId", and it still requires "authorization" as a property.
My question is:
What is the correct JSON template for configuring an AzureDataLakeAnalyticsLinkedService with a service principal?
OK, sorry for asking a question that I figured out myself in the end.
While it is true that the Azure portal complains about the template, it does allow you to deploy it. I had of course tried this, but since the Azure portal does not show the error message, only an error flag, I did not realize the error came from the service principal's lack of permissions and not from the template it complained about.
So by adding more permissions to the service principal and deploying the JSON, disregarding the validation complaints, it did work. Sorry for bothering.
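For anyone landing here later: the answer doesn't say which permissions were added, so the following is only a hypothetical sketch of granting the service principal access to the ADLA account (the account name is taken from the template above; the role and scope are assumptions, adjust to your setup):

# Hypothetical role assignment; the role your scenario needs may differ
New-AzRoleAssignment -ApplicationId "<service principal id>" `
    -RoleDefinitionName "Contributor" `
    -Scope "/subscriptions/<subscription id>/resourceGroups/<resource group>/providers/Microsoft.DataLakeAnalytics/accounts/adftestaccount"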

Azure Logic App creation of Redis Cache requires x-ms-api-version

I'm building an Azure Logic App and trying to automate the creation of an Azure Redis Cache. There is a specific action for this (Create or update resource), which I was able to bring up:
As you can see, I entered 2016-02-01 as the API version. I was trying different values here, just guessing from other API versions I know from Microsoft. I can't find any resource on this on the internet. The result of this step is:
{
  "error": {
    "code": "InvalidResourceType",
    "message": "The resource type could not be found in the namespace 'Microsoft.Cache' for api version '2016-02-01'."
  }
}
What is the correct value for x-ms-api-version and where can I find the history for this value based on the resource provider?
Try
Resource Provider: Microsoft.Cache
Name: Redis/<yourrediscachename>
x-ms-api-version: 2017-02-01
One easy way to find the supported API versions for each resource type is to use the CLI in the Azure Portal, e.g.
az provider show --namespace Microsoft.Cache --query "resourceTypes[?resourceType=='Redis'].apiVersions | [0]"
would return:
[
"2017-02-01",
"2016-04-01",
"2015-08-01",
"2015-03-01",
"2014-04-01-preview",
"2014-04-01"
]
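If you prefer PowerShell over the CLI, a rough equivalent using the Az module (the older AzureRM cmdlet is Get-AzureRmResourceProvider) would be:

# List the API versions supported by the Microsoft.Cache/Redis resource type
$provider = Get-AzResourceProvider -ProviderNamespace Microsoft.Cache
($provider.ResourceTypes | Where-Object { $_.ResourceTypeName -eq 'Redis' }).ApiVersions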
I made it work with:
(Screenshot of the working Logic App action settings)
HTH

Why can't I configure Azure diagnostics to use Azure Table Storage via the new Azure Portal?

I am developing a web API which will be hosted in Azure. I would like to use Azure diagnostics to log errors to Azure Table storage.
In the Classic portal, I can configure the logs to go to Azure table storage.
Classic Portal Diagnostic Settings
However in the new Azure portal, the only storage option I have is to use Blob storage:
New Azure Portal Settings
It seems that if I were to make use of a web role, I could configure the data store for diagnostics, but as I am developing a web API, I don't want to create a separate web role for every API just so that I can log to an Azure table.
Is there a way to programmatically configure azure diagnostics to propagate log messages to a specific data store without using a web role? Is there any reason why the new Azure portal only has diagnostic settings for blob storage and not table storage?
I can currently work around the problem by using the classic portal but I am worried that table storage for diagnostics will eventually become deprecated since it hasn't been included in the diagnostic settings for the new portal.
(I'll do some necromancy on this question, as this was the most relevant Stack Overflow question I found while searching for a solution, and it is no longer possible to do this through the classic portal.)
Disclaimer: Microsoft has seemingly removed support for logging to Table storage in the Azure Portal, so I don't know whether this is deprecated or will soon be deprecated, but I have a solution that works now (31.03.2017):
There are specific settings that control this logging; I first found information on them in an issue in the Azure PowerShell GitHub repo: https://github.com/Azure/azure-powershell/issues/317
The specific settings we need are (from github):
AzureTableTraceEnabled = True, & AppSettings has:
DIAGNOSTICS_AZURETABLESASURL
Using the excellent resource explorer (https://resources.azure.com) under (GUI navigation):
/subscriptions/{subscriptionName}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/sites/{siteName}/config/logs
I was able to find the Setting AzureTableTraceEnabled in the Properties.
The property AzureTableTraceEnabled has Level and sasUrl. In my experience, updating these two values (Level="Verbose", sasUrl="someSASurl") works, as updating the sasUrl also sets DIAGNOSTICS_AZURETABLESASURL in the app settings.
How do we change this? I did it in PowerShell. I first tried the cmdlet Get-AzureRmWebApp but could not find what I wanted; the old Get-AzureWebSite does display AzureTableTraceEnabled, but I could not get it to update (perhaps someone with more PowerShell/Azure experience can chime in on how to use the ASM cmdlets to do this).
The solution that worked for me was setting the property through the Set-AzureRmResource command, with the following settings:
Set-AzureRmResource -PropertyObject $PropertiesObject -ResourceGroupName "$ResourceGroupName" -ResourceType Microsoft.Web/sites/config -ResourceName "$ResourceName/logs" -ApiVersion 2015-08-01 -Force
Where the $PropertiesObject looks like this:
$PropertiesObject = @{applicationLogs=@{azureTableStorage=@{level="$Level";sasUrl="$SASUrl"}}}
The Level corresponds to "Error", "Warning", "Information", "Verbose" and "Off".
It is also possible to do this in an ARM template (the important bit is the properties on the logs resource under the site):
{
  "apiVersion": "2015-08-01",
  "name": "[variables('webSiteName')]",
  "type": "Microsoft.Web/sites",
  "location": "[resourceGroup().location]",
  "tags": {
    "displayName": "WebApp"
  },
  "dependsOn": [
    "[resourceId('Microsoft.Web/serverfarms/', variables('hostingPlanName'))]"
  ],
  "properties": {
    "name": "[variables('webSiteName')]",
    "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]"
  },
  "resources": [
    {
      "name": "logs",
      "type": "config",
      "apiVersion": "2015-08-01",
      "dependsOn": [
        "[resourceId('Microsoft.Web/sites/', variables('webSiteName'))]"
      ],
      "tags": {
        "displayName": "LogSettings"
      },
      "properties": {
        "azureTableStorage": {
          "level": "Verbose",
          "sasUrl": "SASURL"
        }
      }
    }
  ]
}
The issue with doing it in ARM is that I've yet to find a way to generate the correct SAS. It is possible to fetch the Azure Storage account keys (from: ARM - How can I get the access key from a storage account to use in AppSettings later in the template?):
"properties": {
"type": "AzureStorage",
"typeProperties": {
"connectionString": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'),';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), providers('Microsoft.Storage', 'storageAccounts').apiVersions[0]).keys[0].value)]"
}
}
There are also some clever ways of generating them using linked templates (from: http://wp.sjkp.dk/service-bus-arm-templates/).
The current solution I went for (time constraints) was a custom Powershell script that looks something like this:
...
$SASUrl = New-AzureStorageTableSASToken -Name $LogTable -Permission $Permissions -Context $StorageContext -StartTime $StartTime -ExpiryTime $ExpiryTime -FullUri
$PropertiesObject = @{applicationLogs=@{azureTableStorage=@{level="$Level";sasUrl="$SASUrl"}}}
Set-AzureRmResource -PropertyObject $PropertiesObject -ResourceGroupName "$ResourceGroupName" -ResourceType Microsoft.Web/sites/config -ResourceName "$ResourceName/logs" -ApiVersion 2015-08-01 -Force
...
This is quite an ugly solution, as it is something extra you need to maintain in addition to the ARM template, but it is easy, fast, and it works while we wait for updates to the ARM templates (or for someone cleverer than I am to come and enlighten us).
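For completeness, a self-contained sketch of what such a script can look like end to end (all names are placeholders; it assumes the AzureRM and Azure.Storage modules used above, and note that the key-retrieval syntax differs slightly between AzureRM versions):

# Placeholders - adjust to your environment
$ResourceGroupName  = "myResourceGroup"
$ResourceName       = "mywebapp"
$StorageAccountName = "mystorageaccount"
$LogTable           = "AppLogs"
$Level              = "Verbose"

# Build a storage context and a table SAS URL (read/add/update/delete, valid one year)
$storageKey     = (Get-AzureRmStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAccountName)[0].Value
$StorageContext = New-AzureStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $storageKey
$SASUrl         = New-AzureStorageTableSASToken -Name $LogTable -Permission "raud" -Context $StorageContext `
                      -StartTime (Get-Date) -ExpiryTime (Get-Date).AddYears(1) -FullUri

# Point the web app's application logs at the table via the logs config resource
$PropertiesObject = @{applicationLogs=@{azureTableStorage=@{level="$Level";sasUrl="$SASUrl"}}}
Set-AzureRmResource -PropertyObject $PropertiesObject -ResourceGroupName $ResourceGroupName `
    -ResourceType Microsoft.Web/sites/config -ResourceName "$ResourceName/logs" -ApiVersion 2015-08-01 -Force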
We don't typically recommend using Tables for log data - it can result in the append-only pattern, which at scale doesn't work effectively for Table storage. See the log data anti-pattern in the Table Design Guide. Often we see that even though people think of log data as structured, the way they typically query it makes Blobs more efficient. (A small prefix-lookup sketch follows the excerpt below.)
Excerpt from the Design Guide:
Log data anti-pattern
Typically, you should use the Blob service instead of the Table service to store log data.
Context and problem
A common use case for log data is to retrieve a selection of log entries for a specific date/time range: for example, you want to find all the error and critical messages that your application logged between 15:04 and 15:06 on a specific date. You do not want to use the date and time of the log message to determine the partition you save log entities to: that results in a hot partition because at any given time, all the log entities will share the same PartitionKey value (see the section Prepend/append anti-pattern).
...
Solution
The previous section highlighted the problem of trying to use the Table service to store log entries and suggested two, unsatisfactory, designs. One solution led to a hot partition with the risk of poor performance writing log messages; the other solution resulted in poor query performance because of the requirement to scan every partition in the table to retrieve log messages for a specific time span. Blob storage offers a better solution for this type of scenario and this is how Azure Storage Analytics stores the log data it collects.
This section outlines how Storage Analytics stores log data in blob storage as an illustration of this approach to storing data that you typically query by range.
Storage Analytics stores log messages in a delimited format in multiple blobs. The delimited format makes it easy for a client application to parse the data in the log message.
Storage Analytics uses a naming convention for blobs that enables you to locate the blob (or blobs) that contain the log messages for which you are searching. For example, a blob named "queue/2014/07/31/1800/000001.log" contains log messages that relate to the queue service for the hour starting at 18:00 on 31 July 2014. The "000001" indicates that this is the first log file for this period. Storage Analytics also records the timestamps of the first and last log messages stored in the file as part of the blob’s metadata. The API for blob storage enables you to locate blobs in a container based on a name prefix: to locate all the blobs that contain queue log data for the hour starting at 18:00, you can use the prefix "queue/2014/07/31/1800."
Storage Analytics buffers log messages internally and then periodically updates the appropriate blob or creates a new one with the latest batch of log entries. This reduces the number of writes it must perform to the blob service.
If you are implementing a similar solution in your own application, you must consider how to manage the trade-off between reliability (writing every log entry to blob storage as it happens) and cost and scalability (buffering updates in your application and writing them to blob storage in batches).
Issues and considerations
Consider the following points when deciding how to store log data:
If you create a table design that avoids potential hot partitions, you may find that you cannot access your log data efficiently.
To process log data, a client often needs to load many records.
Although log data is often structured, blob storage may be a better solution.
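To illustrate the prefix-based blob lookup described in the excerpt, a small sketch (assuming the Az.Storage module; the account name and key are placeholders, and '$logs' is the container Storage Analytics writes its logs to):

# List the queue-service log blobs for the hour starting at 18:00 on 31 July 2014
$ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey "<key>"
Get-AzStorageBlob -Container '$logs' -Context $ctx -Prefix 'queue/2014/07/31/1800/' |
    Select-Object Name, Length, LastModified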
