APIM resource health history beyond 4 weeks - azure

We've been using APIM for years; it's generally not a bad platform, pretty stable and reliable. We recently onboarded a fairly big customer and, true to Murphy's law, APIM went down for almost an hour on one of the first days, which obviously made no one happy.
APIM has been fine and dandy before and after the incident, but the Health history only goes back 4 weeks. It would help to show logs demonstrating that it was an outlier event. Is there a way to get the Health history months or years back?

How about using Azure Data Factory (ADF)? I have tried this; it is possible, and here are the implementation details.
Using an ADF Copy Data activity, I configured a pipeline that stores the Health history in Azure Blob storage. This is the workaround I used:
My APIM Instance Health History in the Portal:
Created an APIM instance > Added a few Function APIs > Tested
Created Storage Account > Blob Storage > Container
Created ADF Pipeline > Copy Data Activity >
Source: Selected a REST dataset for the Azure resource and provided the API URL with its authorization header (see the sketch after these steps):
Sink: Added a Blob Storage dataset > Selected the container > Set the file name to TestJson.json, since I chose the JSON format for the mapping.
Mapping: Clicked Import schemas and added the respective keys to the schema cells:
Note: You can set Import schemas to None so that all of the data is mapped automatically.
Validate/Run the Pipeline:
Result - Azure Resource Health History in the Blob Storage:
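For reference, the REST call behind that Copy activity source can also be made directly from code. Below is a minimal C# sketch of the same idea, assuming the Resource Health availabilityStatuses endpoint; the resource ID, api-version, connection string and container name are hypothetical placeholders, not values from the setup above.

using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using Azure.Core;
using Azure.Identity;
using Azure.Storage.Blobs;

class HealthHistoryExport
{
    static async Task Main()
    {
        // Hypothetical APIM resource ID - replace with your own.
        string resourceId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.ApiManagement/service/<apim-name>";

        // Acquire an ARM token (this plays the role of the authorization header configured in the ADF source).
        var credential = new DefaultAzureCredential();
        AccessToken token = await credential.GetTokenAsync(
            new TokenRequestContext(new[] { "https://management.azure.com/.default" }));

        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token.Token);

        // Resource Health availability statuses for the resource (the api-version here is an assumption).
        string url = "https://management.azure.com" + resourceId +
                     "/providers/Microsoft.ResourceHealth/availabilityStatuses?api-version=2020-05-01";
        string json = await http.GetStringAsync(url);

        // Persist the snapshot to a blob, mirroring the Copy activity's sink.
        var container = new BlobContainerClient("<storage-connection-string>", "<container>");
        await container.UploadBlobAsync(
            "health-" + DateTime.UtcNow.ToString("yyyyMMdd") + ".json",
            new MemoryStream(Encoding.UTF8.GetBytes(json)));
    }
}

Running something like this on a schedule (or letting the ADF pipeline do the equivalent) builds up your own health-history archive that is not limited to the portal's 4-week window.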

Related

Azure Stream Analytics job path pattern for ApplicationInsights data in storage container

My goal is to import telemetry data from an ApplicationInsights resource into a SQL Azure database.
To do so, I enabled allLogs and AllMetrics in the Diagnostic settings of the ApplicationInsights instance, and set the Destination details to "Archive to a storage account".
This works fine, and I can see data being saved to containers beneath the specified storage account, as expected. For example, page views are successfully written to the insights-logs-apppageviews container.
My understanding is that I can use StreamAnalytics job(s) from here to import these JSON files into SQL Azure by specifying a container as input, and a SQL Azure table as output.
The problems I encounter from here are twofold:
I don't know what to use for the "Path pattern" on the input resource: there are available tokens for {date} and {time}, but the actual observed container path is slightly different, and I'm not sure how to account for this discrepancy. For example, the {date} token expects the YYYY/MM/DD format, but that part of the observed path is of the format /y={YYYY}/m={MM}/d={DD} (illustrated below). If I try to use the {date} token, nothing is found. As far as I can tell, there doesn't appear to be any way to customize this.
For proof-of-concept purposes, I resorted to using a hard-coded container path with some data in it. With this, I was able to set the output to a SQL Azure table, check that there were no schema errors between input and output, and after starting the job, I do see the first batch of data loaded into the table. However, if I perform additional actions to generate more telemetry, I see the JSON files updating in the storage containers, yet no additional data is written to the table. No errors appear in the Activity Log of the job to explain why the updated telemetry data is not being picked up.
What settings do I need to use in order for the job to run continuously/dynamically and update the database as expected?
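For context on the first problem (an illustration, not a path from this account): a blob written by diagnostic settings typically lands at a path of the form
resourceId=/SUBSCRIPTIONS/<sub-id>/RESOURCEGROUPS/<rg>/PROVIDERS/MICROSOFT.INSIGHTS/COMPONENTS/<app-insights-name>/y=2022/m=07/d=28/h=05/m=00/PT1H.json
whereas the {date} token expects that date portion to look like 2022/07/28, which is why the token matches nothing.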

Upscaling/Downscaling provisioned RU for cosmos containers at specific time

As mentioned in the Microsoft documentation, there is support for increasing/decreasing the provisioned RUs of Cosmos DB containers using the Cosmos DB Java SDK, but when I try to perform the steps I get the error below:
com.azure.cosmos.CosmosException: {"innerErrorMessage":"\"Operation 'PUT' on resource 'offers' is not allowed through Azure Cosmos DB endpoint. Please switch on such operations for your account, or perform this operation through Azure Resource Manager, Azure Portal, Azure CLI or Azure Powershell\"\r\nActivityId: 86fcecc8-5938-46b1-857f-9d57b7, Microsoft.Azure.Documents.Common/2.14.0, StatusCode: Forbidden","cosmosDiagnostics":{"userAgent":"azsdk-java-cosmos/4.28.0 MacOSX/10.16 JRE/1.8.0_301","activityId":"86fcecc8-5938-46b1-857f-9d57b74c6ffe","requestLatencyInMs":89,"requestStartTimeUTC":"2022-07-28T05:34:40.471Z","requestEndTimeUTC":"2022-07-28T05:34:40.560Z","responseStatisticsList":[],"supplementalResponseStatisticsList":[],"addressResolutionStatistics":{},"regionsContacted":[],"retryContext":{"statusAndSubStatusCodes":null,"retryCount":0,"retryLatency":0},"metadataDiagnosticsContext":{"metadataDiagnosticList":null},"serializationDiagnosticsContext":{"serializationDiagnosticsList":null},"gatewayStatistics":{"sessionToken":null,"operationType":"Replace","resourceType":"Offer","statusCode":403,"subStatusCode":0,"requestCharge":"0.0","requestTimeline":[{"eventName":"connectionAcquired","startTimeUTC":"2022-07-28T05:34:40.472Z","durationInMicroSec":1000},{"eventName":"connectionConfigured","startTimeUTC":"2022-07-28T05:34:40.473Z","durationInMicroSec":0},{"eventName":"requestSent","startTimeUTC":"2022-07-28T05:34:40.473Z","durationInMicroSec":5000},{"eventName":"transitTime","startTimeUTC":"2022-07-28T05:34:40.478Z","durationInMicroSec":60000},{"eventName":"received","startTimeUTC":"2022-07-28T05:34:40.538Z","durationInMicroSec":1000}],"partitionKeyRangeId":null},"systemInformation":{"usedMemory":"71913 KB","availableMemory":"3656471 KB","systemCpuLoad":"empty","availableProcessors":8},"clientCfgs":{"id":1,"machineId":"uuid:248bb21a-d1eb-46a5-a29e-1a2f503d1162","connectionMode":"DIRECT","numberOfClients":1,"connCfg":{"rntbd":"(cto:PT5S, nrto:PT5S, icto:PT0S, ieto:PT1H, mcpe:130, mrpc:30, cer:false)","gw":"(cps:1000, nrto:PT1M, icto:PT1M, p:false)","other":"(ed: true, cs: false)"},"consistencyCfg":"(consistency: Session, mm: true, prgns: [])"}}}
at com.azure.cosmos.BridgeInternal.createCosmosException(BridgeInternal.java:486)
at com.azure.cosmos.implementation.RxGatewayStoreModel.validateOrThrow(RxGatewayStoreModel.java:440)
at com.azure.cosmos.implementation.RxGatewayStoreModel.lambda$toDocumentServiceResponse$0(RxGatewayStoreModel.java:347)
at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:106)
at reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber.onNext(FluxSwitchIfEmpty.java:74)
at reactor.core.publisher.FluxPeek$PeekSubscriber.onNext(FluxPeek.java:200)
at reactor.core.publisher.FluxHandle$HandleSubscriber.onNext(FluxHandle.java:119)
The message says to switch on such operations for your account, but I could not find any page to do that. Can I use Azure Functions to do the same thing at a specific time?
Code snippet:
CosmosAsyncContainer container = client.getDatabase("DatabaseName").getContainer("ContainerName"); // get a handle to the container
ThroughputProperties autoscaleContainerThroughput = container.readThroughput().block().getProperties(); // read the current throughput settings
container.replaceThroughput(ThroughputProperties.createAutoscaledThroughput(newAutoscaleMaxThroughput)).block(); // newAutoscaleMaxThroughput = desired max RU/s
This is because disableKeyBasedMetadataWriteAccess is set to true on the account. You will need to contact either your subscription owner or someone with the DocumentDB Account Contributor role to modify the throughput using PowerShell or the Azure CLI (links to samples). You can also do this by redeploying the ARM template or Bicep file used to create the account (be sure to do a GET on the resource first so you don't accidentally change something).
If you are looking for a way to automatically scale resources up and down on a schedule, please refer to this sample: Scale Azure Cosmos DB throughput by using an Azure Functions Timer trigger (a rough sketch is shown below).
To learn more about the disableKeyBasedMetadataWriteAccess property and its impact on control-plane operations from the data-plane SDKs, see Preventing changes from the Azure Cosmos DB SDKs.
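For the scheduled approach, the timer-triggered function looks roughly like the sketch below. This is only an outline along the lines of the linked sample, not the sample itself; the cron expression, names, connection string and RU value are hypothetical, and this data-plane SDK call will still be rejected while disableKeyBasedMetadataWriteAccess is true.

using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ScaleCosmosOnSchedule
{
    // Hypothetical connection string - in practice read it from app settings.
    private static readonly CosmosClient Client = new CosmosClient("<cosmos-connection-string>");

    // Example schedule: every day at 08:00 UTC.
    [FunctionName("ScaleUpCosmosContainer")]
    public static async Task Run([TimerTrigger("0 0 8 * * *")] TimerInfo timer, ILogger log)
    {
        Container container = Client.GetDatabase("DatabaseName").GetContainer("ContainerName");

        // Read the current throughput, then replace it with a new autoscale max RU/s.
        ThroughputResponse current = await container.ReadThroughputAsync(requestOptions: null);
        log.LogInformation($"Current autoscale max RU/s: {current.Resource.AutoscaleMaxThroughput}");

        // This call returns 403 while key-based metadata writes are disabled on the account.
        await container.ReplaceThroughputAsync(ThroughputProperties.CreateAutoscaleThroughput(10000));
    }
}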

Azure Data Factory: event not starting pipeline

I've set up an Azure Data Factory pipeline containing a copy activity. For testing purposes both source and sink are Azure Blob Storage accounts.
I want to execute the pipeline as soon as a new file is created in the source Azure Blob Storage.
I've created a trigger of type BlobEventsTrigger. Blob path begins with has been set to //
I use Cloud Storage Explorer to upload files but it doesn't trigger my pipeline. To get an idea of what is wrong, how can I check if the event is fired? Any idea what could be wrong?
Thanks
Reiterating what others have stated:
Must be using a V2 Storage Account
Trigger name must only contain letters, numbers and the '-' character (this restriction will soon be removed)
Must have registered the subscription with the Event Grid resource provider (this will be done for you via the UX soon)
The trigger makes the following properties available: @triggerBody().folderPath and @triggerBody().fileName. To use these in your pipeline you must map them to pipeline parameters and reference them as @pipeline().parameters.parametername (see the example after this list).
Finally, based on your configuration, setting blob path begins with to // will not match any blob event. The UX will actually show you an error message saying that value is not valid. Please refer to the event-based trigger documentation for examples of valid configurations.
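As a rough illustration of that mapping (the parameter names here are made up): when you attach the trigger to the pipeline, ADF asks for values for the pipeline parameters, and you would set something like
sourceFolder: @triggerBody().folderPath
sourceFile: @triggerBody().fileName
and then reference them inside the pipeline activities as @pipeline().parameters.sourceFolder and @pipeline().parameters.sourceFile.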
Please reference this. First, it needs to be a V2 storage account. Second, you need to register it with Event Grid.
https://social.msdn.microsoft.com/Forums/azure/en-US/db332ac9-2753-4a14-be5f-d23d60ff2164/azure-data-factorys-event-trigger-for-pipeline-not-working-for-blob-creation-deletion-most-of-the?forum=AzureDataFactory
There seems to be a bug with the Blob storage trigger: if more than one trigger is allocated to the same blob container, none of the triggers will fire.
For some reason (another bug, but this time in Data Factory?), if you edit your trigger several times in the Data Factory window, Data Factory seems to lose track of the triggers it creates, and your single trigger may end up creating multiple duplicate triggers on the blob storage. This condition activates the first bug discussed above: the blob storage trigger doesn't fire anymore.
To fix this, delete the duplicate triggers. For that, navigate to your blob storage resource in the Azure portal. Go to the Events blade. From there you'll see all the triggers that the data factories added to your blob storage. Delete the duplicates.
And now, on 20.06.2021, the same for me: the event trigger is not working, even though when editing its definition in ADF it shows all the files in the folder that match. But when I add a new file to that folder, nothing happens!
If you're creating your trigger via an ARM template, make sure you're aware of this bug: the "runtimeState" (aka "Activated") property of the trigger can only be set to "Stopped" via the ARM template. The trigger will need to be activated via PowerShell or the ADF portal.
The Event Grid resource provider needs to have been registered within the specific Azure subscription.
Also, if you use Synapse Studio pipelines instead of Data Factory (like me), make sure the Data Factory resource provider is also registered.
Finally, the user should have both the 'Owner' and 'Storage Blob Data Contributor' roles on the storage account.

How can I get the date of creation of my azure site

I have a site on Azure, but I am not able to see the date of creation and activation of my site in the Azure portal. I need this for legal purposes.
You may review your Activity Logs. Through these, you can determine:
• what operations were taken on the resources in your subscription
• who initiated the operation (although operations initiated by a backend service do not return a user as the caller)
• when the operation occurred (in your case, on your Web App)
• the status of the operation
• the values of other properties that might help you research the operation
The activity log contains all write operations (PUT, POST, DELETE) performed on your resources. It does not include read operations (GET). You can use the audit logs to find an error when troubleshooting or to monitor how a user in your organization modified a resource.
Note: Activity logs are retained for 90 days. You can query for any range of dates, as long as the starting date is not more than 90 days in the past.
As per my understanding, you need the creation date of your Azure Web App.
If you use Get-AzureRmWebApp it will return the last-modified date, so it will not help.
What you can do is get the timestamp for each deployment operation by using the command below.
Get-AzureRmResourceGroup -Name "<ResourceGroupName>" | Get-AzureRmResourceGroupDeployment
The deployment time should be the same as creation time.
I hope this helps.
I haven't found a great way to do this either, but similar to Clavin's answer, you can use the Azure CLI, e.g.:
az deployment group list -g <your group here> --query "[?starts_with(name, 'website_deployment_')]"
That will list all the web app deployments - the name of the app is inside the properties.parameters under a key e.g. appService_<name>_name - not sure why MS made this so awfully complicated.

How can I programmatically (C#) read the autoscale settings for a WebApp?

I'm trying to build a small program to change the autoscale settings for our Azure WebApps, using the Microsoft.WindowsAzure.Management.Monitoring and Microsoft.WindowsAzure.Management.WebSites NuGet packages.
I have been roughly following the guide here.
However, we are interested in scaling WebApps / App Services rather than Cloud Services, so I am trying to use the same code to read the autoscale settings but providing a resource ID for our WebApp. I have already got the credentials required for making a connection (using a browser window popup for Active Directory authentication, but I understand we can use X.509 management certificates for non-interactive programs).
This is the request I'm trying to make. Credentials already established, and an exception is thrown earlier if they're not valid.
AutoscaleClient autoscaleClient = new AutoscaleClient(credentials);
var resourceId = AutoscaleResourceIdBuilder.BuildWebSiteResourceId(webspaceName: WebSpaceNames.NorthEuropeWebSpace, serverFarmName: "Default2");
AutoscaleSettingGetResponse get = autoscaleClient.Settings.Get(resourceId); // exception here
The WebApp (let's call it "MyWebApp") is part of an App Service Plan called "Default2" (Standard: 1 small), in a Resource Group called "WebDevResources", in the North Europe region. I expect that my problem is that I am using the wrong names to build the resourceId in the code - the naming conventions in the library don't map well onto what I can see in the Azure Portal.
I'm assuming that BuildWebSiteResourceId is the correct method to call, see MSDN documentation here.
However the two parameters it takes are webspaceName and serverFarmName, neither of which match anything in the Azure portal (or Google). I found another example which seemed to be using the WebApp's geo region for webSpaceName, so I've used the predefined value for North Europe where our app is hosted.
While trying to find the correct value for serverFarmName in the Azure Portal, I found the Resource ID for the App Service Plan, which looks like this:
/subscriptions/{subscription-guid}/resourceGroups/WebDevResources/providers/Microsoft.Web/serverfarms/Default2
That resource ID isn't valid for the call I'm trying to make, but it does support the idea that a 'serverfarm' is the same as an App Service Plan.
When I run the code, regardless of whether the resourceId parameters seem to be correct or garbage, I get this error response:
<string xmlns="http://schemas.microsoft.com/2003/10/Serialization/">
{"Code":"SettingNotFound","Message":"Could not find the autoscale settings."}
</string>
So, how can I construct the correct resource ID for my WebApp or App Service Plan? Or alternatively, is there a different tree I should be barking up to programmatically manage WebApp scaling?
Update:
The solution below got the info I wanted. I also found the Azure resource explorer at resources.azure.com extremely useful to browse existing resources and find the correct names. For example, the name for my autoscale settings is actually "Default2-WebDevResources", i.e. "{AppServicePlan}-{ResourceGroup}" which I wouldn't have expected.
There is a preview service, https://resources.azure.com/, where you can inspect all your resources easily. If you search for autoscale in the UI you will easily find the settings for your resource. It will also show you how to call the relevant REST API endpoint to read or update that resource.
It's a great tool for revealing a lot of details for your deployed resources and it will actually give you an ARM template stub for the resource you are looking at.
And to answer your question, you could programmatically call the REST API from a client with updated settings for autoscale. The REST API is one way of doing this, the SDK another and PowerShell a third.
The guide you're following is based on the Azure Service Management model, aka Classic mode, which is deprecated and exists mainly for backward compatibility.
You should instead use the latest Microsoft.Azure.Insights NuGet package for getting the autoscale settings.
Sample code using the NuGet package above:
using Microsoft.Azure.Management.Insights;
using Microsoft.Rest;
//... Get necessary values for the required parameters
// token: an Azure AD access token for https://management.azure.com/
var client = new InsightsManagementClient(new TokenCredentials(token));
// e.g. resourceGroupName = "WebDevResources", autoScaleSettingName = "Default2-WebDevResources" (see the update above)
client.AutoscaleSettings.Get(resourceGroupName, autoScaleSettingName);
Besides, autoscaleSettings is a resource under the "Microsoft.Insights" provider and not under the "Microsoft.Web" provider, which explains why you are not able to find it with your serverfarm resourceId.
See the REST API Reference below for getting the autoscale settings.
GET
https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/microsoft.insights/autoscaleSettings/{autoscale-setting-name}?api-version={api-version}
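Using the names from this question and the setting name noted in the update above, the concrete call would look something like (keeping the api-version placeholder as-is):
GET https://management.azure.com/subscriptions/{subscription-guid}/resourceGroups/WebDevResources/providers/microsoft.insights/autoscaleSettings/Default2-WebDevResources?api-version={api-version}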
