What are the 'averages' in Graylog extractor metrics? - graylog3

Using Graylog v3.3.5:
When I look at the metric details of an input's extractor it has these average values:
Metrics
3,684,359 total invocations since boot, averages: NaN, 0.67, 42.04.
Can anyone define what this is an average of?
The rest of the metrics look like this:
(screenshot: Graylog extractor metrics)

They are, respectively, the one-, five-, and fifteen-minute rates for this extractor (the rate of invocations per second).
You can see that in the code:
{numeral(metrics.total.rate.one_minute).format('0,0.[00]')},{' '}
{numeral(metrics.total.rate.five_minute).format('0,0.[00]')},{' '}
{numeral(metrics.total.rate.fifteen_minute).format('0,0.[00]')}.
The rate is a RateMetricsResponse, which eventually leads us to com.codahale.metrics.Timer, where we can see more details about the metrics, e.g.:
getFifteenMinuteRate()
Returns the fifteen-minute exponentially-weighted moving average rate
at which events have occurred since the meter was created.
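If it helps to see where those numbers come from on the JVM side, here is a minimal sketch against the Dropwizard (codahale) metrics Timer; the metric name and the recorded duration are made up for illustration and are not what Graylog actually registers:

import java.util.concurrent.TimeUnit;
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;

public class ExtractorRateSketch {
    public static void main(String[] args) {
        MetricRegistry registry = new MetricRegistry();
        // Hypothetical metric name; Graylog registers its own per-extractor names.
        Timer extractorTime = registry.timer("extractor.executionTime");

        // Each extractor invocation is recorded with its duration.
        extractorTime.update(3, TimeUnit.MILLISECONDS);

        // The three "averages" in the UI are the exponentially-weighted
        // moving-average rates (invocations per second) over 1, 5 and 15 minutes.
        System.out.printf("%.2f, %.2f, %.2f%n",
                extractorTime.getOneMinuteRate(),
                extractorTime.getFiveMinuteRate(),
                extractorTime.getFifteenMinuteRate());

        // This is the "total invocations since boot" figure.
        System.out.println("total invocations: " + extractorTime.getCount());
    }
}

All three "averages" are rates in invocations per second, not counts.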

Related

Azure App Service metrics aggregation for requests: why are Sum and Count different?

When looking at the metrics from our app services in Azure, I'm very confused by the Sum and Count aggregations for requests. They should be the same, according to the MS tech doc.
Count: The number of measurements captured during the aggregation interval.
When the metric is always captured with the value of 1, the count aggregation is equal to the sum aggregation. This scenario is common when the metric tracks the count of distinct events and each measurement represents one event. The code emits a metric record every time a new request arrives.
And this MS tech doc as well.
Though not the case in this example, Count is equal to Sum in cases where a metric is always captured with the value of 1. This is common when a metric tracks the occurrence of a transactional event--for example, the number of HTTP failures mentioned in a previous example in this article.
So, let's say, for a specific period, if there are 10 HTTP requests, the count of requests is 10, and the sum of requests is also 10.
But ours are all different. Below are one web app service's Sum and Count metrics; you can see they are very different. But why?
From the official REST API, we can see that Count and Sum are still different.
If you want more explanation, you can refer to the following post, or raise a support ticket for help.
Related Post:
Azure App Service Metrics - How to interpret Sum vs. Count related to requests?
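As a hedged illustration (not Azure's actual pipeline): Count is the number of measurement records in the aggregation interval, while Sum adds up the values those records carry, so the two diverge as soon as a record's value is something other than 1 (for example, when requests are pre-aggregated per minute before being emitted). The numbers below are made up:

import java.util.List;

public class SumVsCountSketch {
    public static void main(String[] args) {
        // Hypothetical pre-aggregated "Requests" measurements: one record per
        // minute, each carrying the number of requests seen in that minute.
        List<Double> measurements = List.of(120.0, 95.0, 240.0);

        long count = measurements.size();                // 3 records
        double sum = measurements.stream()
                .mapToDouble(Double::doubleValue)
                .sum();                                  // 455 requests

        // Count equals Sum only if every record's value is exactly 1.
        System.out.println("Count = " + count + ", Sum = " + sum);
    }
}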

Sampling Intervals for Azure Metrics

Could you please help me understand how Azure metrics are calculated? As an example, we can see the requests/total requests graph and a number at a given point in time (say 6000 at 4:34 PM) for Azure API Management. The request count has no meaning at a single point in time; it is a measure over a given period of time. When I researched, I found that the number represents the number of requests received during the sampling interval. However, no further data is available on what that sampling interval is. The Azure portal's metrics graphs have no setting to view or change the sampling interval either.
So what sampling interval is used for Azure metrics? Or what does the total request count mean at a given point in time?
(However, Application Insights metrics do allow you to set the sampling interval.)
Could you please shed some light? Thanks.
I think the comment from AjayKumar-MSFT is correct, so I'll summarize it in case it helps others:
Typically, 'Requests' is the total number of requests regardless of their resulting HTTP status code, whereas 'Requests In Application Queue' is the number of requests in the application request queue. You can always change the chart settings for more detailed info by going into the metric and opening the ellipsis (...) menu / settings.

Different reporting frequency for different types of metrics in micrometer

Can I set different reporting frequencies for different types of metrics in Micrometer? For example, I want to send endpoint metrics with a 10s step and the others with a 5s step.
There's a property for the reporting frequency (step) per meter registry, but AFAICT there's no concept of a reporting frequency per meter. If you'd like to pursue this, you can create an issue to request the feature in its issue tracker: https://github.com/micrometer-metrics/micrometer/issues
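If a per-registry step is acceptable as a workaround, a sketch like the one below (not an officially documented pattern) runs two registries with different step durations and binds each meter to the registry with the frequency you want; LoggingMeterRegistry is used only because it is easy to demo, and the meter names are hypothetical:

import java.time.Duration;
import io.micrometer.core.instrument.Clock;
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.logging.LoggingMeterRegistry;
import io.micrometer.core.instrument.logging.LoggingRegistryConfig;

public class PerRegistryStepSketch {

    // Step (reporting frequency) is a per-registry property, not per meter.
    static LoggingMeterRegistry registryWithStep(Duration step) {
        LoggingRegistryConfig config = new LoggingRegistryConfig() {
            @Override
            public String get(String key) {
                return null; // fall back to defaults for everything else
            }

            @Override
            public Duration step() {
                return step;
            }
        };
        return new LoggingMeterRegistry(config, Clock.SYSTEM);
    }

    public static void main(String[] args) throws InterruptedException {
        LoggingMeterRegistry fiveSecondRegistry = registryWithStep(Duration.ofSeconds(5));
        LoggingMeterRegistry tenSecondRegistry = registryWithStep(Duration.ofSeconds(10));

        // Hypothetical split: endpoint metrics report every 10s, the rest every 5s.
        Counter endpointHits = tenSecondRegistry.counter("http.endpoint.hits");
        Counter cacheMisses = fiveSecondRegistry.counter("cache.misses");

        endpointHits.increment();
        cacheMisses.increment();

        Thread.sleep(30_000); // let both registries publish a few times
    }
}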

Logstash metrics plugin: What does events.rate_5m mean?

This should be a fairly easy question for Logstash veterans.
When I use the metrics plugin, what does events.rate_5m mean?
Does it mean: Number of events per second in a 5 minute window?
Does it mean: Number of events every 5 minutes?
Also, what's the difference between using this over timer.rate_5m?
The documentation isn't very clear and I have problems understanding it.
Thanks in advance!
Logstash uses the Metriks library to generate the metrics.
According to that site:
A meter that measures the mean throughput and the one-, five-, and fifteen-minute exponentially-weighted moving average throughputs.
and
A timer measures the average time as well as throughput metrics via a meter.
A meter counts events and a timer is used to look at durations (you have to pass a name and a value into a timer).
To answer your specific question, rate_5m is the per-second rate, exponentially weighted over roughly the last 5 minutes; it is not the number of events per 5 minutes.
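If a concrete picture helps, here is a rough sketch of the exponentially-weighted moving average behind such a 5-minute rate, in the spirit of the Metriks/Dropwizard implementations (the 5-second tick and 5-minute decay period follow their conventions, but this is illustrative, not the library's actual code):

public class Ewma5mSketch {
    private static final double TICK_INTERVAL_SECONDS = 5.0;  // meters tick every 5s
    private static final double WINDOW_SECONDS = 5 * 60.0;    // 5-minute decay period
    private static final double ALPHA =
            1 - Math.exp(-TICK_INTERVAL_SECONDS / WINDOW_SECONDS);

    private double ratePerSecond = 0.0; // events per second, not events per 5 minutes
    private long uncounted = 0;
    private boolean initialized = false;

    void mark() {
        uncounted++;
    }

    // Called every 5 seconds: fold the latest instantaneous rate into the average.
    void tick() {
        double instantRate = uncounted / TICK_INTERVAL_SECONDS;
        uncounted = 0;
        if (initialized) {
            ratePerSecond += ALPHA * (instantRate - ratePerSecond);
        } else {
            ratePerSecond = instantRate;
            initialized = true;
        }
    }

    public static void main(String[] args) {
        Ewma5mSketch rate5m = new Ewma5mSketch();
        // Simulate a steady 10 events/second over 10 minutes of 5-second ticks.
        for (int tick = 0; tick < 120; tick++) {
            for (int i = 0; i < 50; i++) {
                rate5m.mark();
            }
            rate5m.tick();
        }
        // Prints roughly 10.0: the per-second rate, not 3000 "events per 5 minutes".
        System.out.println(rate5m.ratePerSecond);
    }
}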

Azure Diagnostic - how to get performance counter raw data

I am looking for a way to get the raw data from a performance counter in Windows Azure
using the diagnostics API.
So far I've noticed that I can configure a counter from the known counters
and set the sampling rate for that counter.
Is the sampling rate configured in the diagnostics configuration the sampling rate
that the counter calculation is based on?
If not, how can I get the raw data for that counter, since I want to get the CPU user time (for example)
and do the calculation myself?
Thanks
Each counter has a sampling frequency, from 1 second upward. Azure will sample each instance at the given rate, capture the values, and store them locally on each instance. Furthermore, there is a setting that tells Azure to transfer these values from each instance into the storage account's WADPerformanceCountersTable. The transfer period is measured in minutes, with a minimum of once per minute.
For details, you may want to read this:
http://convective.wordpress.com/2009/12/10/diagnostics-management-in-windows-azure/
and this:
http://convective.wordpress.com/2010/12/01/configuration-changes-to-windows-azure-diagnostics-in-azure-sdk-v1-3/
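If you want to read the raw samples back yourself once they have been transferred, a sketch along these lines should work (it uses the legacy Azure Storage table SDK for Java; the connection string, counter specifier, and the CounterName/CounterValue column names are assumptions based on how WADPerformanceCountersTable is typically laid out):

import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.table.CloudTable;
import com.microsoft.azure.storage.table.CloudTableClient;
import com.microsoft.azure.storage.table.DynamicTableEntity;
import com.microsoft.azure.storage.table.TableQuery;

public class RawCounterReadSketch {
    public static void main(String[] args) throws Exception {
        // Assumed connection string for the diagnostics storage account.
        String connectionString =
                "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...";

        CloudStorageAccount account = CloudStorageAccount.parse(connectionString);
        CloudTableClient tableClient = account.createCloudTableClient();
        CloudTable table = tableClient.getTableReference("WADPerformanceCountersTable");

        // Filter on the counter you configured, e.g. processor time.
        String filter = TableQuery.generateFilterCondition(
                "CounterName", TableQuery.QueryComparisons.EQUAL,
                "\\Processor(_Total)\\% Processor Time");

        TableQuery<DynamicTableEntity> query =
                TableQuery.from(DynamicTableEntity.class).where(filter);

        // Each row is one raw sample, captured at the configured sampling rate,
        // so you can run your own calculations over the values.
        for (DynamicTableEntity row : table.execute(query)) {
            double value = row.getProperties().get("CounterValue").getValueAsDouble();
            System.out.println(row.getTimestamp() + " -> " + value);
        }
    }
}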
