Dashboard F5 data download - health-monitoring

In F5 > Statistics > Dashboard it is possible to download raw data via the 'history' icon.
I need to download this data on a regular basis, so automation comes into play.
I can't find such a report in the F5 report repository. I tried linking F5 to Zabbix to analyze the data there, but I don't have access to the F5 backend. I set up a UI macro, but I would like something more reliable in place.
Any tips are most welcome.
Thanks!

Since you're on version 13, you have the option to load the Telemetry Streaming iControl LX RPM onto the BIG-IP and send the data to any number of preconfigured consumers, or set up a generic pull consumer and request the same data as JSON through the API.
https://clouddocs.f5.com/products/extensions/f5-telemetry-streaming/latest/
This is the same set of data that they use to populate the BIG-IQ Centralized Management performance and health analytics.
The caveat is that, depending on what and how much you request, it can start to tax the system, so enable only what you need, in chunks, and watch how it affects system performance. I've seen the mightiest systems grind to a halt when asked to ship ALL telemetry data to Splunk or Sumo Logic.
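If you just want to script a periodic download, a pull consumer is the lightest-weight option. A minimal sketch in Python, assuming a pull consumer named My_Pull_Consumer has already been declared; the hostname and credentials are placeholders:

```python
# Minimal sketch: poll a Telemetry Streaming pull consumer on a schedule.
# "My_Pull_Consumer" and the credentials are placeholders for your own setup.
import requests

BIGIP = "https://bigip.example.com"

resp = requests.get(
    f"{BIGIP}/mgmt/shared/telemetry/pullconsumer/My_Pull_Consumer",
    auth=("admin", "secret"),  # use a dedicated read-only account in practice
    verify=False,              # only if the BIG-IP has a self-signed certificate
)
resp.raise_for_status()
print(resp.json())  # the same stats the dashboard renders, as JSON
```

Drop that in cron (or any scheduler) and you have the regular, reliable export you were after.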
Hope this helps.

Related

Netsuite API for Application Performance Monitor SuiteApp

Is there any SuiteScript API to pull the Application Performance Monitor metrics from the NetSuite database, or any other way to get this data? We need to store the data for future reference so we can optimize our transaction records. Can anyone help with getting this data? Thanks!
It's not public, but yes, you can make requests to the underlying Suitelets that power NetSuite APM. To see this, go to one of the APM modules, inspect the page, and choose the Network tab. Once it's open, refresh the data and inspect the request that was made to fetch the data that populates these metrics.
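Once you've captured a request in the Network tab, you can replay it on a schedule. A rough Python sketch, with placeholders for everything you'd copy out of DevTools; keep in mind this is an undocumented internal API, so it may change without notice:

```python
# Minimal sketch: replay a Suitelet request captured from the browser's
# Network tab. The account, script/deploy IDs, and cookie are placeholders
# copied from your own captured request; this is not a supported API.
import requests

APM_URL = ("https://<account>.app.netsuite.com/app/site/hosting/scriptlet.nl"
           "?script=<script-id>&deploy=<deploy-id>")

resp = requests.get(
    APM_URL,
    headers={"Cookie": "<session-cookie-from-devtools>"},
)
resp.raise_for_status()
print(resp.json())  # the metrics payload the APM page renders
```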
Good luck!

Would Prometheus and Grafana be an incorrect tool to use for request logging, tracking and analysis?

I am currently creating a faster test harness for our team and will be recording a baseline from our prod SDK run and our staging SDK run. I am running the tests via Jest and eventually want to fire the parsed requests and their query params at a datastore of sorts, with a nice UI around it for tracking.
I thought that Prometheus and Grafana would be able to provide that, but after getting a little POC working yesterday, it seems this combo is used more for tracking application performance than for request log handling/manipulation/tracking.
Is this the right tool for what I am trying to achieve, and if so, could someone shed some light on where I might find more reading aligned with what I am trying to do?
Prometheus does only one thing, and it does it well: it collects metrics and stores them. It is used for monitoring your infrastructure or applications for performance, availability, error rates, etc. You can write rules using PromQL expressions to create alerts based on conditions and send them to Alertmanager, which can forward them to PagerDuty, Slack, email, or any ticketing system. Even though Prometheus comes with a UI for visualising the data, it's better to use Grafana, since it's very good at that and makes the data easy to analyse.
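So for your harness, the Prometheus-shaped approach is to emit aggregated metrics (counts, latencies) rather than store individual request logs. A minimal sketch using the prometheus_client package, with hypothetical metric names and a stand-in for your harness loop:

```python
# Minimal sketch: expose a request counter and a latency histogram that
# Prometheus can scrape and Grafana can chart. Metric names are made up.
import time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("harness_requests_total",
                   "Requests replayed by the test harness", ["endpoint"])
LATENCY = Histogram("harness_request_seconds",
                    "Latency of replayed requests", ["endpoint"])

def record(endpoint, duration):
    # Called once per replayed request by the (hypothetical) harness.
    REQUESTS.labels(endpoint=endpoint).inc()
    LATENCY.labels(endpoint=endpoint).observe(duration)

if __name__ == "__main__":
    start_http_server(8000)       # Prometheus scrapes http://host:8000/metrics
    while True:
        record("/search", 0.123)  # stand-in for real harness activity
        time.sleep(1)
```

Note what you can't get back out: Prometheus keeps the aggregates, not the individual requests and their query params. If you need per-request records, put those in a database or log store and keep Prometheus for the trend lines.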
If you are looking for a distributed tracing tool, check out Jaeger.

Is it possible to store uploaded pictures for OCR analysis in azure storage for later debugging and analysis?

Context
I have a mobile app that gives our users the ability to capture the name plate of our products automatically. For this I use the Azure Cognitive Services OCR service.
I am a bit worried that customers might capture pictures of insufficient quality or of the wrong area of the product (where no name plate is). To analyse whether this is the case, it would be handy to have a copy of the captured pictures so we can learn what went well or what went wrong.
Question
Is it possible to not only process an uploaded picture but also store it in Azure Storage so that I can analyse it at a later point in time?
What I've tried so far
I configured the Diagnostic settings so that logs and metrics are stored in Azure Storage. As the name suggests, though, these are only logs and metrics, not the actual images. So this does not solve my issue.
Remarks
I know that I can implement this manually in the app, but I think it would be better if the picture only had to be uploaded once.
I'm aware that there are data protection considerations that must be made.
No, you can't get automatic storage of the input image from the OCR operation alone; you have to implement it yourself.
But to avoid uploading the picture twice, as you said, you could put the logic on the server side: send the image to your own API, and in the API forward it to OCR while storing it in parallel.
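A minimal sketch of that server-side flow, assuming Flask, the azure-storage-blob package, and the Computer Vision Read API; the connection string, endpoint, key, and container name are placeholders:

```python
# Minimal sketch: one upload from the app, stored and OCR'd in parallel.
from uuid import uuid4

import requests
from azure.storage.blob import BlobServiceClient
from flask import Flask, request

app = Flask(__name__)
blob_service = BlobServiceClient.from_connection_string("<storage-connection-string>")

OCR_URL = "https://<region>.api.cognitive.microsoft.com/vision/v3.2/read/analyze"
OCR_KEY = "<cognitive-services-key>"

@app.route("/ocr", methods=["POST"])
def ocr():
    image = request.get_data()  # raw image bytes from the mobile app
    # Keep a copy for later debugging/analysis...
    blob_service.get_blob_client("captures", f"{uuid4()}.jpg").upload_blob(image)
    # ...and submit the same bytes to the OCR service.
    resp = requests.post(
        OCR_URL,
        headers={"Ocp-Apim-Subscription-Key": OCR_KEY,
                 "Content-Type": "application/octet-stream"},
        data=image,
    )
    resp.raise_for_status()
    # The Read API is asynchronous; the result URL comes back in this header.
    return {"operation": resp.headers["Operation-Location"]}
```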
But I guess, based on your question, that you might not have any server-side component in your app?

Cognos monitoring

Does Cognos 8 have an API which can be queried to find out:
the last time a scheduled report has run
was it successful?
if not, what caused it to fail?
It would be preferable for the API to be web-based. I have read about a JMX interface, but the documentation was lacking.
Cognos 8 has an option to enable logging to an audit database. That database is then queryable for details such as when a report ran, what parameters were used, and what errors, if any, occurred.
Link to IBM site for setting up the logging database.
Link to IBM site for setting up the connection to the database.
Basically, you create a compatible database with the proper settings and tell Cognos how to connect to it; the next time the Cognos services start up, they will automatically create the necessary logging tables and begin populating them.
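Once the audit database is populated, anything that speaks SQL can query it. A rough Python sketch using pyodbc against SQL Server; the table and column names below (COGIPF_RUNREPORT etc.) are from the standard audit schema, but verify them against the schema your Cognos version actually creates:

```python
# Minimal sketch: list recent report runs and their status from the
# Cognos audit database. Connection details are placeholders; the
# COGIPF_* names should be checked against your installed audit schema.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=audit-db;"
    "DATABASE=cognos_audit;UID=reader;PWD=secret"
)
rows = conn.execute(
    """
    SELECT COGIPF_REPORTPATH, COGIPF_STATUS, COGIPF_LOCALTIMESTAMP
    FROM COGIPF_RUNREPORT
    ORDER BY COGIPF_LOCALTIMESTAMP DESC
    """
).fetchall()
for path, status, ran_at in rows:
    print(ran_at, path, status)
```

Wrap that in a small web service if you need the web-based interface you mentioned.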
Typically, to accomplish this via an API, you would use the Cognos SDK. The SDK allows you to query the schedules and run history associated with reports and see whether a request completed or failed. If it failed, you will see the history associated with the failure, much like in the Cognos Administration section when you look at failed runs.
This is a good place to start to look at a sample:
http://www-01.ibm.com/support/docview.wss?uid=swg21343791

IIS 7 Logs Vs Custom

I want to log some information about my visitors. Is it better to use the IIS-generated logs or to create my own logging in a SQL Server 2008 database?
I know I should probably provide more information about my specific scenario, but I'd just like, generally, the pros and cons of either proposal.
You can add additional information to the IIS logs from ASP.NET using HttpResponse.AppendToLog. Additionally, you could use the Advanced Logging module to create your own logs with custom filters and custom data, including data from performance counters and more.
It all depends on what information you want to analyse.
If you're doing aggregations and rollups, then you'd want to pull this data into a database for analysis; a database gives you access to indexes and better querying tools.
If you're doing infrequent, one-off, simple queries, then LogParser might be sufficient for your needs. However, you'll be constantly scanning unindexed flat files looking for data, which is I/O intensive.
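That said, even plain IIS logs are easy to aggregate ad hoc. A minimal Python sketch that tallies hits per URL from a default W3C-format log; the file path is a placeholder and the field layout follows the #Fields directive in the log header, which varies per site:

```python
# Minimal sketch: count hits per URL stem in a W3C-format IIS log file.
# The path is a placeholder; field names come from the #Fields header line.
from collections import Counter

hits = Counter()
with open(r"C:\inetpub\logs\LogFiles\W3SVC1\u_ex230101.log") as log:
    fields = []
    for line in log:
        if line.startswith("#Fields:"):
            fields = line.split()[1:]          # column names for data rows
        elif not line.startswith("#") and fields:
            row = dict(zip(fields, line.split()))
            hits[row.get("cs-uri-stem", "?")] += 1

for uri, count in hits.most_common(10):
    print(count, uri)
```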
But as you say, without knowing more about your specific scenario it's hard to say what would be best.
