For the Flush() method in Azure App Insights, I was wondering whether it impacts the performance of the project.
I tried removing Flush() and all the custom data is still sent to App Insights. So my question is: why do we need Flush()? Can we remove it?
Flush() on TelemetryClient pushes all the data it currently has in a buffer to the App Insights service.
You can see its source code here: https://github.com/Microsoft/ApplicationInsights-dotnet/blob/3115fe1cc866a15d09e9b5f1f7f596385406433d/src/Microsoft.ApplicationInsights/TelemetryClient.cs#L593.
Normally, Application Insights will send your data in batches in the background so it uses the network more efficiently.
If you have developer mode enabled or call Flush() manually, data is sent immediately.
Typically you do not need to call Flush().
But in a case where you know the process will exit after that point, you'll want to call Flush() to make sure all the data is sent.
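For illustration, here is a minimal sketch of that exit-time pattern in a short-lived console app (the event name and the parameterless TelemetryClient constructor are assumptions for the sketch, not part of the original answer):

using System;
using System.Threading;
using Microsoft.ApplicationInsights;

class Program
{
    static void Main()
    {
        // Assumes an instrumentation key is configured elsewhere
        // (e.g. ApplicationInsights.config); otherwise set it explicitly.
        var client = new TelemetryClient();

        client.TrackEvent("JobFinished"); // hypothetical event name

        // The process is about to exit, so push anything still buffered.
        client.Flush();

        // The default in-memory channel delivers asynchronously,
        // so give it a moment before the process tears down.
        Thread.Sleep(1000);
    }
}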
I have an Azure Function running on a timer every few minutes. After running for a varied amount of time, it begins to fail on every run because of an external API, and hitting the restart button manually in the Azure portal fixes the problem so the job works again.
Is there a way to either get an Azure Function to restart itself, or have something externally restart it via a webhook, an API request, or on a timer?
I have tried using Azure's API Management service, which can be used to restart other kinds of app services in Azure, but it turns out there is no functionality in the API to request a restart of an Azure Function. I also looked into PowerShell, and it seems to have the same limitation: you can restart other app services but not Azure Functions.
I have tried working with the management API:
https://learn.microsoft.com/en-us/rest/api/azure/
Example API request that lists the functions within a Function App:
GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/sites/{name}/functions?api-version=2016-08-01
but from what I have researched, there is no functionality to restart an Azure Function.
Basically, I want to restart the Azure Function as if I had hit this button:
[Screenshot: the manual stop/start and restart buttons for an Azure Function in the Azure portal]
because there is a case where the job gets into a bad state every time it runs, due to an external API I have no control over, and hitting restart manually gets the job going again.
Another way to restart your function is by using the "watchDirectories" setting in the host.json file. If your host.json looks like this:
{
"version": "2.0",
"watchDirectories": [ "Toggle" ]
}
You could trigger a restart by using the following statement in a function:
System.IO.File.WriteAllText("D:/home/site/wwwroot/Toggle/restart.conf", DateTime.Now.ToString());
Looking at the logs, the function reloads as it has detected the file change in the directory:
Watched directory change of type 'Changed' detected for 'D:\home\site\wwwroot\Toggle\restart.conf'
Host configuration has changed. Signaling restart
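To make this concrete, here is a hedged sketch of how that one-liner might be used from a timer-triggered function (the C# Functions programming model is assumed, and ExternalApiLooksBroken is a hypothetical placeholder for your own failure detection):

using System;
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class SelfRestart
{
    [FunctionName("SelfRestart")]
    public static void Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer, ILogger log)
    {
        if (ExternalApiLooksBroken()) // hypothetical: detect the bad state
        {
            // Touching a file inside the watched "Toggle" directory
            // signals the Functions host to restart.
            File.WriteAllText("D:/home/site/wwwroot/Toggle/restart.conf", DateTime.Now.ToString());
            log.LogWarning("Bad state detected; forcing a host restart.");
        }
    }

    // Stub for illustration only.
    private static bool ExternalApiLooksBroken() => false;
}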
Azure Functions by their nature are called upon an event. That may be a timer, a trigger, or an invocation such as an HTTP event. They cannot be restarted per se, i.e. if a function throws an exception, you cannot find the specific instance and re-run it using the out-of-the-box functionality.
However, you can engineer your way to a more reliable solution:
Replay the event that invoked the function (i.e. kick it off again)
For non-sensitive data, log the payload of the function and create another function that can be called on demand to re-run it, i.e. you create a proxy to "re-invoke" the function.
Harden your code by implementing a retry policy. See Polly (a sketch follows this list).
Add a service bus into your architecture. Have a simple function write the call payload to the message bus, and another function pick up the payload and process it more extensively (where there may be unreliable integrations etc.). That way, if the call fails, you can abandon and dead-letter failures for later reprocessing.
Consider using Durable Function Extensions and leveraging the durable patterns, these can help make your functions code more robust and manage state.
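As a hedged sketch of the retry-policy idea from the list above, Polly lets you wrap the unreliable external call like this (CallExternalApiAsync is a hypothetical stand-in for the flaky API call):

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Polly;

class ExternalCaller
{
    // Hypothetical stand-in for the flaky external API call.
    static Task CallExternalApiAsync() => Task.CompletedTask;

    static async Task Main()
    {
        // Retry up to 3 times with exponential backoff when the call throws.
        var retryPolicy = Policy
            .Handle<HttpRequestException>()
            .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

        await retryPolicy.ExecuteAsync(() => CallExternalApiAsync());
    }
}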
Why don't you try the ARM API below? Since Azure Functions also fall under the App Service category, this may be helpful:
https://learn.microsoft.com/en-us/rest/api/appservice/webapps/restart
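Since a Function App is a Microsoft.Web/sites resource like the list-functions request earlier, the restart call should look like this (hedged; the linked page documents web apps, but Function Apps share the same site-level API):

POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/sites/{name}/restart?api-version=2016-08-01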
In an Azure Function, when you enable telemetry to Application Insights and fire a (for example) logger.LogInformation call (where logger is an ILogger instance), does it send it to the Application Insights instance asynchronously (i.e. non-blocking), synchronously (blocking), or through a local log that gets drained asynchronously?
Generally, the logger would be hooked up to turn log calls into the various trackMessage or related calls in the Application Insights SDK. Those messages get batched up on the AI side, and are then sent after a threshold count of messages has been met, or after a certain amount of time has elapsed. The calls into Application Insights are all non-blocking and will not throw exceptions (you don't want telemetry to negatively affect your real app!).
The C# SDKs that Azure Functions would use are here: https://github.com/Microsoft/ApplicationInsights-dotnet/
I said "generally" at the top because all of this depends on how the SDK is configured, and that is up to the underlying Azure Functions code. Their GitHub repo with that info is here: https://github.com/Azure/Azure-Functions, and they have a specific wiki page set up with AI info as well, here: https://github.com/Azure/Azure-Functions/wiki/App-Insights
This appears to be the relevant code for how data is sent to Application Insights:
https://github.com/Microsoft/ApplicationInsights-dotnet/tree/develop/src/Microsoft.ApplicationInsights/Channel
The ILogger wraps a TelemetryClient, which sends data to an ITelemetryChannel.
The InMemoryChannel contains the logic for how data is pooled and sent to Application Insights. As John mentioned, the channel uses a "buffer" for storing data that hasn't been sent. The buffer is flushed and the data sent asynchronously to the Azure portal either when the buffer is full or at a specific time interval (30 seconds).
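As a hedged illustration of where that buffering lives, you can construct the channel yourself and adjust its sending interval (the 30-second value mirrors the default described above; treat the exact property names as assumptions against your SDK version):

using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

class ChannelDemo
{
    static void Main()
    {
        // InMemoryChannel buffers telemetry and sends it in the background.
        var channel = new InMemoryChannel
        {
            SendingInterval = TimeSpan.FromSeconds(30) // how often the buffer is drained
        };

        var config = new TelemetryConfiguration
        {
            InstrumentationKey = "<your-ikey>", // placeholder
            TelemetryChannel = channel
        };

        var client = new TelemetryClient(config);
        client.TrackTrace("hello"); // buffered, not sent immediately
        client.Flush();             // force the buffer out
    }
}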
I tried tracking custom metrics with and without flushing. However, the metrics only intermittently show up in Application Insights under the "Custom" section. First question: is it required to run Flush() after every single TrackMetric(metric) call in order for the telemetry to be sent to Application Insights? Second: why is there this intermittent behavior? I'm only writing one metric at a time, so it's not as if I'm overloading Application Insights with thousands of separate calls. Here is my code (this is from a simple console app):
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

public class Program
{
    public static void Main(string[] args)
    {
        var telemetryClient = new TelemetryClient()
        {
            Context = { InstrumentationKey = "{{hidden instrumentation key}}" }
        };
        var metric = new MetricTelemetry
        {
            Name = "ImsWithContextMetric2",
            Sum = 42.0
        };
        telemetryClient.TrackMetric(metric);
        telemetryClient.Flush();
    }
}
I'm also getting some strange behavior in Application Insights in which the custom metric I add shows up under an "Unavailable/deprecated Metrics" section, while a metric that I didn't even add, called "Process CPU (all cores)", pops up under the "Custom" section. Any ideas why this would occur?
Is it required to run Flush() after every single TrackMetric(metric) call in order for the telemetry to be sent to Application Insights?
Since you are using a console application to send events to Application Insights, which might be short-lived, it is definitely a good practice to call .Flush() every once in a while. The SDK uses the InMemoryChannel to send telemetry, and sends it in batches from an in-memory queue. So it is very important to call .Flush() so that the data is forcefully pushed. A good practice is to add a bit of a wait after the event:
telemetryClient.Flush();
Thread.Sleep(1000);
More reading: Flushing data, Ensure you don't lose telemetry
However, the metrics only intermittently show up in Application Insights under the "Custom" section. Why is there this intermittent behavior? I'm only writing one metric at a time, so it's not as if I'm overloading Application Insights with thousands of separate calls.
Sometimes there is a delay before metrics show up in the Azure portal; it can be up to a few minutes. But if you have set things up correctly, you aren't exceeding the throttling limit, and adaptive sampling is disabled, then there is no reason for telemetry to be intermittent. If you still feel something is wrong, start a Fiddler trace (make sure you are capturing from non-browser sessions) and check whether a call is going out to dc.services.visualstudio.com. Make sure the response is 200 OK and that the items were accepted by the server.
I'm also getting strange behavior in Application Insights in which the custom metric I add shows up under an "Unavailable/deprecated Metrics" section.
What version of the SDK are you using? I just tried out the same scenario and the custom metrics are showing up correctly.
And a metric that I didn't even add called "Process CPU (all cores)" pops up under the "Custom" section.
"Process CPU" is a performance counter which is used to track CPU utilization. I believe the SDK will only be able to track these counters if the app is running under IIS or on Azure. It probably got added internally when you created your Application Insights resource. You can ignore it since it won't have data to chart.
Hope this helps!
I have integrated Microsoft Application Insights into my Windows Forms app. In the document Application Insights on Windows Desktop apps, services and worker roles, which uses the default in-memory channel, the application sleeps for one second after flushing before exiting:
tc.Flush(); // only for desktop apps
// Allow time for flushing:
System.Threading.Thread.Sleep(1000);
The document states:
Note that Flush() is synchronous for the persistence channel, but asynchronous for other channels.
As this example is using the in-memory channel, I can deduce that the flush in the code example is asynchronous, hence the sleep.
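For context, here is a hedged sketch of how the persistence channel is typically swapped in (PersistenceChannel shipped in a separate NuGet package at the time; treat the exact namespace and constructor as assumptions against your SDK version):

using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

// Replace the default in-memory channel with the disk-backed persistence
// channel, which buffers telemetry under %LOCALAPPDATA%\Microsoft\ApplicationInsights.
TelemetryConfiguration.Active.TelemetryChannel = new PersistenceChannel();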
In my code I'm using the persistence channel. Just before exiting my program I'm raising an event Application Shutdown:
static void Main(string[] args)
{
    try { /* application code */ }
    finally
    {
        Telemetry.Instance.TrackEvent("Application Shutdown");
        Telemetry.Instance.Flush();
        System.Threading.Thread.Sleep(1000); // allow time for flushing
    }
}
Sticking to the documentation, Flush is synchronous, so the sleep should not be needed prior to application exit. Looking at the events arriving in the Azure portal, though, I can see that for most users the Application Shutdown event never arrives in the cloud. Stepping through the debugger and stepping over Flush, I cannot perceive any delay either.
I'm sure that I use persistence channel because I can see data is buffered in %LOCALAPPDATA%\Microsoft\ApplicationInsights.
My question is:
As the persistence channel's Flush is clearly synchronous, what could be the reason that the last events of every application run are not displayed in Azure?
If I remember correctly, Flush() synchronously writes the remaining telemetry to the buffer (%LOCALAPPDATA% in the case of the persistence channel), but it does not initiate any delivery action. I would expect this telemetry to show up later, on the next application start, if the buffer location does not change, because AI will read the buffered data and send it out.
I might be mistaken here; the logic behind this could have changed a while ago.
Since version 1.3 of the Azure SDK, we have to set the configuration publisher within our web application (e.g. Global.asax) and not WebRole.cs. Is the same true for hooking up the RoleEnvironment.Changed/Changing events?
It depends. Your web application runs in a different process than your WebRole.cs, meaning you'll need to handle it in one of these (or both) depending on the use case.
An example: Let's assume you have a static property in your global.asax that holds an object. This object has been initialized with information coming from your service configuration. Then a few days later you modify this configuration in the portal (maybe a connection string). This will raise the RoleEnvironment.Changing event. In that case, you'll need to handle that event in the web application (global.asax) to re-initialize the static object with the new configuration information.
Note that a web application is not always active; by default it's only fired up after the first request (you can modify this, but it's the default behavior). Meaning that in some cases you might not be able to handle the event in the web application because the process is not active. If handling the event is crucial for you, you should consider handling it in WebRole.cs.
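As a hedged sketch of wiring this up in the web application (MyConfigCache is a hypothetical helper standing in for whatever static state you cache from the service configuration):

using System;
using Microsoft.WindowsAzure.ServiceRuntime;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Re-initialize cached configuration whenever it changes in the portal.
        RoleEnvironment.Changed += (sender, e) =>
        {
            MyConfigCache.Reload(); // hypothetical: re-read settings into statics
        };
    }
}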