What do the various numbers mean on the Hawtio profile page

Hawtio has a lovely view of a running Camel application but I can't find any information on what the numbers refer to on the profile page.
E.g. we see Mean, Min and Max.
What do these refer to? Processes per second? Throughput?
thanks

Yeah these numbers are from Apache Camel, from the MBeans that Camel offers. You can see some details here:
http://camel.apache.org/maven/current/camel-core/apidocs/org/apache/camel/api/management/mbean/package-summary.html
And yeah, min / max / mean are what they say, e.g. the lowest, highest and average processing time.
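If you want to pull the same numbers yourself rather than read them in Hawtio, here is a minimal sketch of querying a Camel route MBean over JMX. The JMX service URL, context name and route id below are placeholders, and the exact ObjectName pattern varies slightly between Camel versions, so browse it first in Hawtio's JMX tab or jconsole; the attribute names come from ManagedPerformanceCounterMBean and are reported in milliseconds.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class CamelRouteStats {
    public static void main(String[] args) throws Exception {
        // Connect to the JVM that runs Camel (host/port are an example)
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();

            // Camel registers one MBean per route; context and route names are placeholders
            ObjectName route = new ObjectName(
                    "org.apache.camel:context=camel-1,type=routes,name=\"route1\"");

            // Mean / min / max processing time, in milliseconds
            System.out.println("Mean: " + mbs.getAttribute(route, "MeanProcessingTime"));
            System.out.println("Min:  " + mbs.getAttribute(route, "MinProcessingTime"));
            System.out.println("Max:  " + mbs.getAttribute(route, "MaxProcessingTime"));
        }
    }
}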

Related

Sampling Intervals for Azure Metrics

Could you please help me understand how Azure metrics are calculated? For example, we can see the requests/total requests graph, and a number at a given point in time (say 6000 at 4:34 PM) for Azure API Management. A request count has no meaning at a single point in time; it is a measure over a given period of time. When I researched this, I found that the number represents the number of requests received during the sampling interval. However, no further information is available on what that sampling interval is, and the Azure portal's metrics graphs have no setting to view or change it either.
So what is the sampling interval used for Azure metrics? Or, what does the total request count mean at a given point in time?
(Application Insights metrics, however, do allow you to set the sampling interval.)
Could you please shed some light? Thanks.
I think the comment from AjayKumar-MSFT is correct, so I'll summarize it in case it helps others:
Typically, 'Requests' is the total number of requests regardless of their resulting HTTP status code, whereas 'Requests In Application Queue' is the number of requests in the application request queue. You can always change the chart settings for more detailed info by opening the metric and going to the ellipsis (...) / Settings.

Liferay: huge DLFileRank table

I have a Liferay 6.2 server that has been running for years and is starting to take a lot of database space, despite limited actual content.
Table            Size     Number of rows
----------------------------------------
DLFileRank       5 GB     16 million
DLFileEntry      90 MB    60,000
JournalArticle   2 GB     100,000
The size of the DLFileRank table seems abnormally big to me (if it is in fact normal, please let me know).
While the file ranking feature of Liferay is nice to have, we would not really mind resetting it if it halves the size of the database.
Question: Would a DELETE FROM DLFileRank be safe? (stop Liferay, run that SQL statement, maybe set dl.file.rank.enabled=false in portal-ext.properties, start Liferay again)
Is there any better way to do it?
Bonus if there is a way to keep recent ranking data and throw away only the old data (not a strong requirement).
Wow. According to the documentation here (Ctrl-F rank), I'd not have expected the number of entries to be so high - did you configure those values differently?
Set the interval in minutes on how often CheckFileRankMessageListener
will run to check for and remove file ranks in excess of the maximum
number of file ranks to maintain per user per file. Defaults:
dl.file.rank.check.interval=15
Set this to true to enable file rank for document library files.
Defaults:
dl.file.rank.enabled=true
Set the maximum number of file ranks to maintain per user per file.
Defaults:
dl.file.rank.max.size=5
And according to the implementation of CheckFileRankMessageListener, it should be enough to just trigger DLFileRankLocalServiceUtil.checkFileRanks() yourself (e.g. through the scripting console). Why you accumulate such a large number of file ranks is beyond me...
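For reference, triggering that cleanup yourself from the server's script console would look something like this (Beanshell/Groovy, plain Java syntax; the 6.2 package name below is an assumption on my part, so verify it against your installation):

// Removes file ranks exceeding dl.file.rank.max.size per user per file -
// the same cleanup CheckFileRankMessageListener performs on its schedule
com.liferay.portlet.documentlibrary.service.DLFileRankLocalServiceUtil.checkFileRanks();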
As you might know, I can never be quoted as saying that direct database manipulation is the way to go - in fact I refuse to even think about the problem that way.

Performance testing - Jmeter results

I am using JMeter (started using it a few days ago) to simulate a load of 30 threads, using a CSV data file that contains login credentials for 3 system users.
The objective I set out to achieve was to measure 30 users (threads) logging in and navigating to a page via the menu over a time span of 30 seconds.
I have set my thread group as:
Number of threads: 30
Ramp-up Period: 30
Loop Count: 10
I ran the test successfully. Now I'd like to understand what the results mean and what is classed as good/bad measurements, and what can be suggested to improve the results. Below is a table of the results collated in the Summary report of Jmeter.
I have conducted research only to find blogs/sites telling me the same info as what is defined on the jmeter.apache.org site. One blog (Nicolas Vahlas) that I came across gave me some very useful information, but it still hasn't helped me understand what to do next with my results.
Can anyone help me understand these results and what I could do next following the execution of this test plan? Or point me in the right direction of an informative blog/site that will help me understand what to do next.
Many thanks.
In my opinion, the deviation is high.
You know your application better than any of us.
You should focus on whether the average response time and the maximum response time you got are acceptable to you and your users. The same applies to throughput.
Your results show that the average response time is below 0.5 seconds and the maximum response time is below 1 second, which is generally acceptable, but what counts as acceptable should be defined by you (is it acceptable to your users?). If the answer is yes, try a higher load to check how the application scales.
In your requirement it is mentioned that you need 30 concurrent users performing different actions. The response time of your requests is low and you have a ramp-up of 30 seconds. Can you please check the total number of active threads during the test? I believe the time for which there are actually 30 concurrent users in the system is pretty short, so the average response time you are seeing may be misleading. I would suggest running the test for longer, so that there really are 30 concurrent users in the system; that would give a reading that matches your requirements.
You can use the Aggregate Report instead of the Summary Report. In performance testing, the following can be used for analysis:
Throughput - requests per second
Response time - 90th percentile
Target application resource utilization (CPU, processor queue length and memory)
Normally the SLA for websites is 3 seconds, but this requirement changes from application to application.
Your test results look good, assuming the users are actually logging into the system/portal. The Summary Report columns mean the following (a worked example follows the list):
Samples: the number of requests sent for a particular request/sampler (here 30 threads x 10 loops = 300 samples each).
Average: the average response time across those 300 samples.
Min: the minimum response time among the 300 samples (the fastest one).
Max: the maximum response time among the 300 samples (the slowest one).
Standard Deviation: a measure of how much the response times vary across the 300 samples.
Error: the percentage of failed requests.
Throughput: the number of requests processed per second.
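To make those definitions concrete, here is a quick worked example with purely illustrative numbers (not taken from your actual results): if a sampler's 300 samples finished within 60 seconds of test time, its Throughput = 300 / 60 = 5 requests/second; if the individual response times added up to 90,000 ms, its Average = 90,000 / 300 = 300 ms. The Summary and Aggregate reports compute these figures per request label and for the total row.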
Hope this will help.

Logstash metrics plugin: What does events.rate_5m mean?

This should be a fairly easy question for Logstash veterans.
When I use the metrics plugin, what does events.rate_5m mean?
Does it mean: Number of events per second in a 5 minute window?
Does it mean: Number of events every 5 minutes?
Also, what's the difference between using this over timer.rate_5m?
The documentation isn't very clear and I have problems understanding it.
Thanks in advance!
Logstash uses the Metriks library to generate the metrics.
According to that site:
A meter that measures the mean throughput and the one-, five-, and fifteen-minute exponentially-weighted moving average throughputs.
and
A timer measures the average time as well as throughput metrics via a meter.
A meter counts events and a timer is used to look at durations (you have to pass a name and a value into a timer).
To answer your specific question, rate_5m is the per-second event rate averaged over the last 5 minutes (an exponentially-weighted moving average, as the quote above says, rather than a strict sliding window).
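For context, a minimal sketch of a metrics filter configuration that produces such fields (the meter/timer names and the duration field are placeholders, and the exact option syntax differs slightly between Logstash versions):

filter {
  metrics {
    # meter counts matching events and emits count, rate_1m, rate_5m and rate_15m
    meter => "events"
    # timer additionally records min/max/mean/stddev of the value you pass in
    timer => { "request_time" => "%{duration}" }
    add_tag => "metric"
  }
}

With meter => "events" you get exactly the events.rate_5m field the question asks about.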

In New Relic RPM, I get reports with an Apdex index listed. What does the subscript mean?

This sounds ridiculous, but New Relic RPM reports an Apdex index in a form like this:
0.92(3.5)
Where the 3.5 is subscripted.
What does the 3.5 mean? I can't find the definition anywhere, and yet there it is in my reports, staring me in the face.
The bracketed/subscripted number is the threshold (in seconds) for your Apdex score. So, in your case, if the full application response (page load) is less than 3.5s then that satisfies the requirement. If your app responds slower than the threshold then your Apdex score is impacted.
This threshold is customizable, so you can select what is appropriate for your application type.
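For reference, the subscript plugs into the standard Apdex formula (this is the generic Apdex definition, not anything New Relic-specific):
Apdex_T = (satisfied + tolerating / 2) / total samples
where a response is "satisfied" if it completes within T seconds and "tolerating" if it takes between T and 4T seconds. So 0.92(3.5) means that, with a 3.5-second threshold, the satisfied requests plus half of the tolerating ones made up 92% of all samples.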
You can read more about Apdex in our docs.
The subscripted number is your target response time for that tier. On the user agent (browser) tier the high-water mark is 7 seconds. You should check US-Only and set this number to 2 to 4 seconds to be world class.
The app server tier must respond much faster. The high-water mark default that NR sets is 0.5 seconds (500 milliseconds); a world-class page buffer flush would average around 50-200 ms.
Remember, all this information is about aggregated averages, not instance data, which will have many outliers and a broad distribution.
